Thursday, December 22, 2011

The Higgs Boson -- and the value of money

What does the (tentatively discovered) Higgs boson have to do with finance? Nothing. At least nothing obvious. Still, it's a fascinating topic with grand sweeping themes. I've written an essay on it for Bloomberg View, which will appear later tonight or tomorrow. I'll add the link when it does.

The interesting thing to me about the Higgs boson, and the associated Higgs field (the boson is an elementary excitation of this field), is the intellectual or conceptual history of the idea. It seems crazy to think that as the universe cooled (very shortly after the big bang) a new field, the Higgs field, suddenly appeared, filling all space and giving particles mass in proportion to how strongly they interact with that field. It would be a crazy idea if it were just a proposal pulled out of thin air. But the history is that Higgs's work (and the work of many others at the same time, in the early 1960s) drew very strong stimulation from the BCS theory of superconductivity in ordinary metals, which appeared in 1957.

That theory explained how superconductivity originates through the emergence, below a critical temperature, of a condensate of paired electrons (hence, bosons) which acts as an extremely sensitive electromagnetic medium. Try to impose a magnetic field inside a superconductor (by bringing a magnet close, for example) and this condensate will respond by stirring up currents which act precisely to cancel the field inside the superconductor. This is the essence of superconductivity -- its appearance changes the physics inside the superconductor in such a way that electromagnetic fields cannot propagate freely, but are screened out over a short distance. In quantum terms (from quantum electrodynamics), this is equivalent to saying that the photon -- the carrier of the electromagnetic field -- comes to have a mass. It does so because it interacts very strongly with the condensate.
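
A compact way to put this, for anyone who likes equations (this is standard textbook material, nothing specific to the Higgs story yet): the field inside a superconductor obeys the London equation, and so dies away over a short penetration depth, exactly as a static field carried by a massive photon would.

```latex
% Meissner screening: a static magnetic field entering a superconductor obeys
% the London equation and dies off over the penetration depth \lambda_L:
\[
  \nabla^2 \mathbf{B} = \frac{\mathbf{B}}{\lambda_L^2}
  \quad\Longrightarrow\quad
  B(x) = B(0)\, e^{-x/\lambda_L} .
\]
% This is exactly the static behaviour of a field carried by a photon with an
% effective mass  m_\gamma \sim \hbar / (\lambda_L c).
```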

This idea from superconductivity is pretty much identical to the Higgs mechanism for giving the W and Z particles (the carriers of the weak force) mass. This is what I think is fascinating. The Higgs prediction arose not so much from complex mathematics as from the use of analogy and metaphor -- I wonder if the universe is in some ways like a superconductor? If we're living in a superconductor (not for ordinary electrical charge, but for a different kind of charge, that of the electroweak field), then it's easy to understand why the W and Z particles have big masses (nearly 100 times the mass of the proton). They're just like photons traveling inside an ordinary superconductor -- inside an ordinary metal, lead or tin or aluminum, cooled down to low temperatures.

I think it's fitting that a physics theory so celebrated for bewildering mathematics and abstraction beyond ordinary imagination actually has its roots in the understanding of grubby things like magnets and metals. That's where the essential ideas were born and found their initial value.

Having said that none of this has anything to do with finance, I should mention a fascinating proposal from 2000 by Per Bak, Simon Nørrelykke and Martin Shubik, which draws a close analogy between the process that determines the value of money and a Higgs-like mechanism. They made the observation that the absolute value of money is essentially undetermined:
The value of money represents a “continuous symmetry”. If, at some point, the value of money was globally redefined by a certain factor, this would have no consequences whatsoever. Thus, in order to arrive at a specific value of money, the continuous symmetry must be broken.
In other words, a loaf of bread could be worth $1, $10, or $100 -- it doesn't matter. But here and now in the real world it does have one specific value. The symmetry is broken.
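
One way to write the symmetry down explicitly (my own shorthand here, not notation taken from their paper): rescale every price and every money balance by the same factor and nothing real changes.

```latex
% The continuous symmetry of nominal prices: for any factor \lambda > 0,
\[
  p_i \;\to\; \lambda\, p_i , \qquad m_j \;\to\; \lambda\, m_j
  \qquad \text{(all goods $i$, all agents $j$)}
\]
% leaves every relative price p_i / p_k and every real balance m_j / p_i
% unchanged. Actual trade settles on one particular scale, breaking the symmetry.
```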

This idea of continuous symmetry is something that arises frequently in physics. And it is indeed the breaking of a continuous symmetry that underlies the onset of superconductivity. The mathematics of field theory shows that, any time a continuous symmetry is broken (so that some variable comes to take on one specific value), a new dynamical mode appears in the theory -- a so-called Goldstone mode -- corresponding to fluctuations along the direction of the continuous symmetry. This isn't quite the appearance of mass -- that takes a further step in the mathematics -- but the Goldstone business is part of the Higgs mechanism.
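
The standard textbook toy example makes the logic concrete -- a complex field with a "Mexican hat" potential (generic field-theory fare, rather than anything specific to the Standard Model Higgs sector):

```latex
% Spontaneous breaking of a continuous symmetry, in the simplest setting:
\[
  V(\phi) = \mu^2 \lvert\phi\rvert^2 + \lambda \lvert\phi\rvert^4 ,
  \qquad \mu^2 < 0 ,\ \lambda > 0 ,
\]
% which is invariant under \phi \to e^{i\theta}\phi. The minima form a circle,
% |\phi| = v = \sqrt{-\mu^2 / (2\lambda)}. Writing \phi = (v + h)\,e^{i\theta},
% fluctuations of h (the radial direction) cost energy and give a massive mode,
% while fluctuations of \theta (around the circle) cost nothing: the Goldstone mode.
```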

I'll try to return to this paper in a future post. It offers a seemingly plausible dynamical model for how a value of money can emerge in an economy, and also for why that value should be subject to strong inherent fluctuations (because of the Goldstone mode). None of this comes out of equilibrium theory, nor should one expect it to, as money is an inherently dynamical thing -- we use it as a tool to manage activities through time, selling our services today to buy food next week, for example.
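
Just to make the Goldstone picture vivid, here's a deliberately crude toy in Python -- my own caricature, emphatically not the actual Bak-Nørrelykke-Shubik dynamics. Each agent on a ring nudges its (log) price toward the average of its neighbours' prices, plus a little noise. Relative prices quickly lock together, but the overall price level -- the direction of the broken symmetry -- wanders freely:

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, noise = 100, 20_000, 0.01
x = np.zeros(N)                    # log prices of N agents arranged on a ring

level, spread = [], []
for _ in range(T):
    neighbours = 0.5 * (np.roll(x, 1) + np.roll(x, -1))
    x = 0.5 * x + 0.5 * neighbours + noise * rng.normal(size=N)
    level.append(x.mean())         # the "Goldstone" direction: the overall price level
    spread.append(x.std())         # relative prices: dispersion around that level

print("typical dispersion of relative prices:", round(float(np.mean(spread[T // 2:])), 3))
print("range wandered by the overall level  :", round(max(level) - min(level), 3))
# Typically the dispersion settles at a small, roughly constant value, while the
# overall level drifts over a much wider range: nothing in the dynamics pins down
# the absolute scale, so it wanders freely -- a Goldstone-like mode.
```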

Tuesday, December 13, 2011

a little more on power laws

I wanted to respond to several insightful comments on my recent post on power laws in finance. And, after that, pose a question on the economics/finance history of financial time series that I hope someone out there might be able to help me with.

First, comments:

ivansml said...
Why exactly is power-law distribution for asset returns inconsistent with EMH? It is trivial to write "standard" economic model where returns have fat tails, e.g. if we assume that stochastic process for dividends / firm profits has fat tails. That of course may not be very satisfactory explanation, but it still shows that EMH != normal distribution. In fact, Fama wrote about non-gaussian returns back in 1960's (and Mandelbrot before him), so the idea is not exactly new. The work you describe here is certainly useful and interesting, but pure patterns in data (or "stylized facts", as economists would call them) by themselves are not enough - we need some theory to make sense of them, and it would be interesting to hear more about contributions from econophysics in that area.
James Picerno said...
It's also worth pointing out that EMH, as I understand it, doesn't assume or dismiss that returns follow some specific distribution. Rather, EMH simply posits that prices reflect known information. For many years, analysts presumed that EMH implies a random distribution, but the empirical record says otherwise. But the random walk isn't a condition of EMH. Andrew Lo of MIT has discussed this point at length. The market may or may not be efficient, but it's not conditional on random price fluctuations. Separately, ivansmi makes a good point about models. You need a model to reject EMH. But that only brings you so far. Let's say we have a model of asset pricing that rejects EMH. Then the question is whether EMH or the model is wrong? That requires another model. In short, it's ultimately impossible to reject or accept EMH, unless of course you completely trust a given model. But that brings us back to square one. Welcome to economics.
I actually agree with these statements. Let me try to clarify. In my post I said, referring to the fat tails in returns and 1/t decay of volatility correlations, that  "None of these patterns can be explained by anything in the standard economic theories of markets (the EMH etc)." The key word is of course "explained."

The EMH has so much flexibility and is so loosely linked to real data that it is indeed consistent with these observations, as Ivansml (Mark) and James rightly point out. I think it is probably consistent with any conceivable time series of prices. But "being consistent with" isn't a very strong claim, especially if the consistency comes from making further subsidiary assumptions about how these fat tails might come from fluctuations in fundamental values. This seems like a "just so" story (even if the idea that fluctuations in fundamental values could have fat tails is not at all preposterous).

The point I wanted to make is that nothing (that I know of) in traditional economics/finance (i.e. coming out of the EMH paradigm) gives a natural and convincing explanation of these statistical regularities. Such an explanation would start from simple well accepted facts about the behaviour of individuals, firms, etc., market structures and so on, and then demonstrate how -- because of certain logical consequences following from these facts and their interactions -- we should actually expect to find just these kinds of power laws, with the same exponents, etc., and in many different markets. Reading such an explanation, you would say "Oh, now I see where it comes from and how it works!"

To illustrate some possibilities, one class of proposed explanations sees large market movements as having inherently collective origins, i.e. as reflecting large avalanches of trading behaviour coming out of the interactions of market participants. Early models in this class include the famous Santa Fe Institute Stock Market model developed in the mid-1990s. This nice historical summary by Blake LeBaron explores the motivations behind this early agent-based model, the first of which was a focus on the interactions among market participants, going beyond standard theories, which typically assume such interactions can be ignored. As LeBaron notes, this work began in part...
... from a desire to understand the impact of agent interactions and group learning dynamics in a financial setting. While agent-based markets have many goals, I see their first scientific use as a tool for understanding the dynamics in relatively traditional economic models. It is these models for which economists often invoke the heroic assumption of convergence to rational expectations equilibrium where agents’ beliefs and behavior have converged to a self-consistent world view. Obviously, this would be a nice place to get to, but the dynamics of this journey are rarely spelled out. Given that financial markets appear to thrive on diverse opinions and behavior, a first level test of rational expectations from a heterogeneous learning perspective was always needed.   
I'm going to write posts looking at this kind of work in much more detail soon. This early model has been greatly extended and has had many diverse offspring; a more recent review by LeBaron gives an updated view. In many such models one finds the natural emergence of power-law distributions for returns, and also long-term correlations in volatility. These appear to be linked to various kinds of interactions between participants. Essentially, the market is an ecology of interacting trading strategies, and it has naturally rich dynamics as new strategies invade and old strategies, which had been successful, fall into disuse. The market never settles into an equilibrium, but shows continuous, ongoing fluctuations.
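
To give a flavour of how interaction alone can generate fat tails, here's a minimal sketch in the spirit of the Cont-Bouchaud herding model -- a much cruder cousin of the Santa Fe market, chosen only because it fits in a few lines, and with all parameter values purely illustrative. Traders are linked at random into groups, each group acts as a single block, and the return is proportional to the net demand:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

N = 3_000       # traders
c = 0.9         # average number of links per trader (just below the percolation threshold)
a = 0.05        # probability that a whole cluster buys (same for selling)
steps = 2_000

returns = []
for step in range(steps):
    G = nx.fast_gnp_random_graph(N, c / N, seed=step)
    demand = 0
    for cluster in nx.connected_components(G):
        u = rng.random()
        if u < a:               # the whole cluster buys together
            demand += len(cluster)
        elif u < 2 * a:         # the whole cluster sells together
            demand -= len(cluster)
    returns.append(demand / N)

r = np.array(returns)
z = (r - r.mean()) / r.std()
print("excess kurtosis of returns     :", round(float((z ** 4).mean() - 3), 2))
print("fraction of moves beyond 4 sigma:", float(np.mean(np.abs(z) > 4)))
# A Gaussian has excess kurtosis zero and puts only ~6e-5 of its mass beyond
# 4 sigma; clustered trading near the percolation threshold typically gives a
# clearly positive excess kurtosis and many more large moves.
```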

Now, these various models haven't yet explained anything, but they do pose potentially explanatory mechanisms, which need to be tested in detail. Just because these mechanisms CAN produce the right numbers doesn't mean this is really how it works in markets. Indeed, some physicists and economists working together have proposed a very different kind of explanation for the power law with exponent 3 for the (cumulative) distribution of returns which links it to the known power law distribution of the wealth of investors (and hence the size of the trades they can make). This model sees large movements as arising in the large actions of very wealthy market participants. However, this is more than merely attributing the effect to unknown fat tails in fundamentals, as would be the case with EMH based explanations. It starts with empirical observations of tail behaviour in several market quantities and argues that these together imply what we see for market returns.
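
The flavour of that argument can be conveyed in a couple of lines (exponents quoted roughly, as reported in that line of work): if trade sizes have a power-law tail and price impact grows like the square root of the trade size, the cubic law for returns follows from a simple change of variables.

```latex
% Sketch of the argument: if trading volumes have a power-law tail and price
% impact grows as the square root of volume,
\[
  \Pr(V > v) \sim v^{-3/2} , \qquad r \sim V^{1/2} ,
\]
% then a change of variables gives the cumulative "inverse cubic" law
\[
  \Pr(\lvert r\rvert > x) \;=\; \Pr\bigl(V > x^2\bigr) \;\sim\; x^{-3} .
\]
```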

There are more models and proposed explanations, and I hope to get into all this in some detail soon. But I hope this explains a little why I don't find the EMH based ideas very interesting. Being consistent with these statistical regularities is not as interesting as suggesting clear paths by which they arise.

Of course, I might make one other point too, and maybe this is, deep down, what I find most empty about the EMH paradigm. It essentially assumes away any dynamics in the market. Fundamentals get changed by external forces, and the theory supposes that this great complex mass of heterogeneous humanity which is the market responds instantaneously to find the new equilibrium which incorporates all information correctly. So, it treats the non-market part of the world -- the weather, politics, business, technology and so on -- as a rich thing with potentially complicated dynamics, and then treats the market itself as a really simple dynamical thing which just gets driven in slave fashion by the outside. This to me seems perversely unnatural and impossible to take seriously. But it is indeed very difficult to rule out with hard data. The idea can always be contorted to remain consistent with observations.

Finally, another valuable comment:
David K. Waltz said...
In one of Taleeb's books, didn't he make mention that something cannot be proven true, only disproven? I think it was the whole swan thing - if you have an appropriate sample and count 100% white swans does not prove there are ONLY white swans, while a sample that has a black one proves that there are not ONLY white swans.
Again, I agree completely. This is a basic point about science. We don't ever prove a theory, only disprove it. And the best science works by trying to find data to disprove a hypothesis, not by trying to prove it.

I assume David is referring to my discussion of the empirical cubic power law for market returns. This is indeed a tentative stylized fact which seems to hold with appreciable accuracy in many markets, but there may well be markets in which it doesn't hold (or periods in which the exponent changes). Finding such deviations  would be very interesting as it might offer further clues as to the mechanism behind this phenomenon.

NOW, for the question I wanted to pose. I've been doing some research on the history of finance, and there's something I can't quite understand. Here's the problem:

1. Mandelbrot in the early 1960s showed that market returns have fat tails; he conjectured that they fit the so-called Stable Paretian (now usually called Lévy stable) distributions, which have power-law tails. These have the nice property (shared with the Gaussian) that returns over longer intervals, built up by summing Stable Paretian returns over shorter ones, have the same form of distribution -- the market looks the same at different time scales. (The stability property is written out just after this list.)
2. However, Mandelbrot noted in that same paper a shortcoming of his proposal. You can't think of returns as being independent and identically distributed (i.i.d.) over different time intervals because the volatility clusters -- high volatility predicts more to follow, and vice versa. We don't just have an i.i.d. process.
3. Lots of people documented volatility clustering over the next few decades, and in the 1980s Robert Engle and others introduced ARCH/GARCH and all that -- simple time series models able to reproduce the realistic properties of financial time series, including volatility clustering.
4. But today I found several papers from the 1990s (and later) still discussing the Stable Paretian distribution as a plausible model for financial time series.
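
For reference, the stability property mentioned in point 1 can be written as follows (standard notation, nothing market-specific):

```latex
% Stability: if X_1, ..., X_n are i.i.d. draws from an \alpha-stable law, then
\[
  X_1 + X_2 + \cdots + X_n \;\overset{d}{=}\; n^{1/\alpha} X_1 + c_n
\]
% for some constant c_n, so summing returns over longer intervals preserves the
% shape of the distribution. For \alpha < 2 these laws have power-law tails,
\[
  \Pr(\lvert X\rvert > x) \;\sim\; C\, x^{-\alpha} , \qquad 0 < \alpha < 2 ,
\]
% while \alpha = 2 is the Gaussian case, which has no power-law tail.
```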

My question is simply -- why was anyone even 20 years ago still writing about the Stable Paretian distribution when the reality of volatility clustering was so well known? My understanding is that this distribution was proposed as a way to save the i.i.d. property (by showing that such a process can still create market fluctuations having similar character on all time scales). But volatility clustering is enough on its own to rule out any i.i.d. process.
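
A quick numerical check of that last claim -- a sketch using scipy's Lévy-stable generator, with α = 1.7 as a purely illustrative choice. I.i.d. stable draws have fat tails, but show essentially no correlation in the absolute returns, unlike real data:

```python
import numpy as np
from scipy.stats import levy_stable

# i.i.d. draws from a symmetric Levy-stable law (alpha < 2 means power-law tails)
r = levy_stable.rvs(1.7, 0.0, size=100_000, random_state=2)

def autocorr(x, lag):
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

vol = np.abs(r)
for lag in (1, 10, 100):
    print(f"autocorrelation of |r| at lag {lag}: {autocorr(vol, lag):+.3f}")
# For an i.i.d. process these numbers hover near zero at every lag, whereas in
# real markets the autocorrelation of |r| stays clearly positive out to very
# long lags -- the volatility clustering that rules out any i.i.d. description.
```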

Of course, the Stable Paretian business has by now been completely ruled out by empirical work establishing the value of the exponent for returns, which is too large to be consistent with such distributions. I just can't see why it wasn't relegated to the history books long before.

The only possibility, it just dawns on me, is that people may have thought some minor variation of the original Mandelbrot view might still work. That is, let the distribution over any interval be Stable Paretian, but let its parameters vary a little from one moment to the next. You give up the i.i.d. property, but you might still get some kind of nice stability as short intervals get combined into longer ones. You could put Mandelbrot's distribution into ARCH/GARCH in place of the Gaussian. But this is only a guess. Does anyone know?
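
Just to make that guess concrete, here's roughly what such a hybrid might look like -- a sketch only, and with α < 2 the innovations have infinite variance, so the usual variance-based reading of GARCH becomes shaky, but the recursion can certainly be written down and simulated:

```python
import numpy as np
from scipy.stats import levy_stable

def garch_with_stable_innovations(T, alpha=1.7, omega=0.05, a=0.05, b=0.93, seed=3):
    """A GARCH(1,1)-style scale recursion driven by Levy-stable shocks.

    s2_t = omega + a * r_{t-1}**2 + b * s2_{t-1},   r_t = sqrt(s2_t) * z_t,
    with z_t i.i.d. symmetric alpha-stable rather than Gaussian.
    """
    z = levy_stable.rvs(alpha, 0.0, size=T, random_state=seed)
    r = np.empty(T)
    s2 = omega / (1.0 - a - b)      # a conventional starting value for the scale
    for t in range(T):
        r[t] = np.sqrt(s2) * z[t]
        s2 = omega + a * r[t] ** 2 + b * s2
    return r

r = garch_with_stable_innovations(50_000)
v = np.abs(r) - np.abs(r).mean()
for lag in (1, 10, 50):
    print(f"autocorrelation of |r| at lag {lag}: {np.dot(v[:-lag], v[lag:]) / np.dot(v, v):+.3f}")
# Unlike the plain i.i.d. stable case, the absolute returns now tend to show
# positive correlation over many lags: fat tails come from the stable shocks,
# clustering from the GARCH recursion. The numbers are noisy, since the tails
# are so heavy, but the qualitative contrast is the point.
```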

Friday, December 9, 2011

Prosecuting Wall St.

By way of Simoleon Sense:
The following is a script of "Prosecuting Wall Street" (CBS) which aired on Dec. 4, 2011. Steve Kroft is correspondent, James Jacoby, producer.

It's been three years since the financial crisis crippled the American economy, and much to the consternation of the general public and the demonstrators on Wall Street, there has not been a single prosecution of a high-ranking Wall Street executive or major financial firm even though fraud and financial misrepresentations played a significant role in the meltdown. We wanted to know why, so nine months ago we began looking for cases that might have prosecutorial merit. Tonight you'll hear about two of them. We begin with a woman named Eileen Foster, a senior executive at Countrywide Financial, one of the epicenters of the crisis.

Steve Kroft: Do you believe that there are people at Countrywide who belong behind bars?

Eileen Foster: Yes.

Kroft: Do you want to give me their names?

Foster: No.

Kroft: Would you give their names to a grand jury if you were asked?

Foster: Yes.

But Eileen Foster has never been asked - and never spoken to the Justice Department - even though she was Countrywide's executive vice president in charge of fraud investigations...
See the video and transcript here.

Tuesday, December 6, 2011

Power laws in finance

My latest column in Bloomberg looks very briefly at some of the basic mathematical patterns we know about in finance. Science has a long tradition of putting data and observation first. Look very carefully at what needs to be explained -- mathematical patterns that show up consistently in the data -- and then try to build simple models able to reproduce those patterns in a natural way.

This path holds great promise in economics and finance, although it hasn't been pursued very far until recently. My Bloomberg column gives a sketch of what is going on, but I'd like to give a few more details here, along with some links.

The patterns we find in finance are statistical regularities -- broad statistical patterns which show up in all markets studied, with an impressive similarity across markets in different countries and for markets in different instruments. The first regularity is the distribution of returns over various time intervals, which has been found generically to have broad power law tails -- "fat tails" -- implying that large fluctuations up or down are much more likely than they would be if markets fluctuated in keeping with normal Gaussian statistics. Anyone who read The Black Swan knows this.
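
To put a rough number on "much more likely" (the calibration below -- matching the two distributions at the two-standard-deviation point -- is purely my own illustrative choice):

```python
from scipy.stats import norm

# Probability of a daily move bigger than 5 standard deviations ...
gauss = 2 * norm.sf(5.0)                        # ... if returns were Gaussian
cubic = 2 * norm.sf(2.0) * (5.0 / 2.0) ** -3    # ... under a cumulative power law with
                                                #     exponent 3, matched to the Gaussian
                                                #     at the 2-sigma point
trading_days = 250
print(f"Gaussian : roughly one such day every {1 / (gauss * trading_days):,.0f} years")
print(f"Cubic law: roughly one such day every {1 / (cubic * trading_days):,.1f} years")
# The Gaussian answer comes out in the thousands of years; the fat-tailed answer
# comes out at roughly a year or so -- much more like what markets actually do.
```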

This pattern has been established in a number of studies over the past 15 years or so, mostly by physicist Eugene Stanley of Boston University and colleagues. This paper from 1999 is perhaps the most notable, as it used enormous volumes of historical data to establish the fat tailed pattern for returns over times ranging from one minute up to about 4 days. One of the most powerful things about this approach is that it doesn't begin with any far reaching assumptions about human behaviour, the structure of financial markets or anything else, but only asks -- are there patterns in the data? As the authors note:
The most challenging difficulty in the study of a financial market is that the nature of the interactions between the different elements comprising the system is unknown, as is the way in which external factors affect it. Therefore, as a starting point, one may resort to empirical studies to help uncover the regularities or “empirical laws” that may govern financial markets.    
This strategy seems promising to physicists because it has worked in building theories of complex physical systems -- liquids, gases, magnets, superconductors -- for which it is also often impossible to know anything in great detail about the interactions between the molecules and atoms within. This hasn't prevented the development of powerful theories because, as it turns out, many of the precise details at the microscopic level DO NOT influence the large scale collective properties of the system. This has inspired physicists to think that the same may be true in financial markets -- at least some of the collective behaviour we see in markets, their macroscopic behaviour, may be quite insensitive to details about human decision making, market structure and so on.

The authors of this 1999 study summarized their findings as follows:

[Quoted summary of the paper's findings.]

Several points of clarification. First, the power law with exponent close to 3 is a result for the cumulative distribution -- the probability that a return will be greater than a certain value, not just equal to that value. Second, the fact that this exponent lies outside the range [0,2] means that the returns cannot be described by the so-called Lévy stable distributions, whose tail exponents must lie in that range. That was the picture initially proposed by Benoit Mandelbrot: returns in each interval as independent, identically distributed draws from a stable law. This study and others have established that this idea can't work -- something more complicated is going on.
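
For what it's worth, here is the kind of simple estimate that lies behind statements like "exponent close to 3" -- a Hill-type tail estimator, run here on synthetic Pareto data with a true exponent of 3 standing in for real returns:

```python
import numpy as np

def hill_exponent(x, tail_fraction=0.01):
    """Hill estimator of the tail exponent alpha, where P(X > x) ~ x**(-alpha)."""
    x = np.sort(np.abs(x))[::-1]              # largest observations first
    k = max(int(tail_fraction * len(x)), 10)  # number of order statistics used
    return k / np.sum(np.log(x[:k] / x[k]))

rng = np.random.default_rng(4)
sample = 1.0 + rng.pareto(3.0, size=200_000)  # Pareto with P(X > x) = x**-3 for x >= 1
print("estimated tail exponent:", round(hill_exponent(sample), 2))
# In studies of real returns one runs this kind of estimate on the largest
# normalized returns; values close to 3 show up across many stocks and markets.
```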

That complication -- the failure of the simple i.i.d. picture -- is also referred to in the second paragraph above. If you take the data on returns at the one-minute level, and randomize the order in which it appears, you still get the same power-law tails in the distribution of returns over one minute; it's the same data. But this new time series has different returns over longer times, generated by combining sequences of the one-minute returns. For the randomized data, the distribution over longer and longer times turns out to converge slowly to a Gaussian, meaning that the true fat-tailed distribution over longer times has its origin in rich and complex correlations between market movements at different times (which get wiped out by the randomization). Again, we're not just dealing with a fixed probability distribution and independent changes over different intervals.
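
This randomization test is easy to mimic on synthetic data. In the sketch below, a garden-variety Gaussian GARCH(1,1) series stands in for real one-minute returns (the parameters are illustrative); shuffle it, aggregate both versions over longer windows, and the shuffled version loses its fat tails much faster:

```python
import numpy as np

rng = np.random.default_rng(5)

# A volatility-clustered stand-in for one-minute returns: Gaussian GARCH(1,1)
T, omega, a, b = 400_000, 0.05, 0.09, 0.90
z = rng.normal(size=T)
r = np.empty(T)
s2 = omega / (1 - a - b)
for t in range(T):
    r[t] = np.sqrt(s2) * z[t]
    s2 = omega + a * r[t] ** 2 + b * s2

def excess_kurtosis(x):
    y = (x - x.mean()) / x.std()
    return (y ** 4).mean() - 3.0            # zero for a Gaussian

def aggregate(x, m):                        # non-overlapping sums of m consecutive returns
    return x[: len(x) // m * m].reshape(-1, m).sum(axis=1)

shuffled = rng.permutation(r)
for m in (1, 16, 64):
    print(f"window {m:3d}:  original {excess_kurtosis(aggregate(r, m)):6.2f}"
          f"   shuffled {excess_kurtosis(aggregate(shuffled, m)):6.2f}")
# At window 1 the two series contain exactly the same numbers. But as returns
# are summed over longer windows, the shuffled series loses its excess kurtosis
# much faster than the original: the fat tails at longer horizons live largely
# in the temporal correlations, not in the one-step distribution alone.
```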

To read more about this, see this nice review by Xavier Gabaix of MIT. It covers this and many other power laws in finance and economics.

Now, the story gets even more interesting if you look past the mere distribution of returns and study the correlations between market movements at different times. Market movements are, of course, extremely hard to predict. But it is very interesting where the unpredictability comes in.

The so-called autocorrelation of the time series of market returns decays to zero after a few minutes. This is essentially a measure of how much the return now can be used to predict a return in the future. After a few minutes, there's nothing. This is the sense in which the markets are unpredictable. However, there are levels of predictability. It was discovered in the early 1990s, and has been confirmed many times since in different markets, that the time series of volatility -- the absolute value of the market return -- has long-term correlations, a kind of long-term memory. Technically, the autocorrelation of this time series only decays to zero very slowly.
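
For concreteness, the quantity being measured in these studies is just the standard autocorrelation function, written here for a return series g(t) in the notation used below:

```latex
% The autocorrelation of a stationary series g(t) at lag \tau:
\[
  \rho_g(\tau) \;=\;
  \frac{\bigl\langle (g(t) - \bar g)\,(g(t+\tau) - \bar g) \bigr\rangle}
       {\bigl\langle (g(t) - \bar g)^2 \bigr\rangle} .
\]
% For returns, \rho_g(\tau) is indistinguishable from zero after a few minutes;
% for the volatility |g(t)| it decays only as a slow power law of \tau.
```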

This is shown in the following figure (from a representative paper, again from the Boston University group), which plots the autocorrelation of the return time series g(t) and also of the volatility, which is the absolute value of g(t):

[Figure: autocorrelation of the returns g(t) and of the volatility |g(t)| as a function of time lag.]

Clearly, whereas the first signal shows no correlations after about 10 minutes, the second shows correlations and predictability persisting out to times as long as 10,000 minutes, which is on the order of 10 days or so.

So, it's the direction of price movements that has very little predictability, whereas the magnitude of changes follows a process with much more interesting structure. It is in the record of this volatility that one sees potentially deep links to other physical processes, including earthquakes. A particularly interesting paper is this one, again by the Boston group, quantifying several ways in which market volatility obeys quantitative laws known from earthquake science, especially the Omori law, which describes how the rate of aftershocks decays following a main earthquake. That rate falls off quite simply in proportion to 1/time since the main quake, meaning that aftershocks are most likely immediately afterward and become progressively less likely as time passes. Episodes of high market volatility appear to follow similar behaviour quite closely.
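
The Omori law itself is usually written as a simple power-law decay of the aftershock rate; the constants K and c just set scales, and the exponent p is close to 1:

```latex
% Omori law for the rate of aftershocks a time t after a main shock:
\[
  n(t) \;=\; \frac{K}{(t + c)^{\,p}} , \qquad p \approx 1 ,
\]
% so the rate falls off essentially as 1/t. In the market analogy, n(t) counts
% bursts of volatility above some threshold in the aftermath of a large shock.
```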

Perhaps even better is another study, which looks at the link to earthquakes with a somewhat tighter focus. The abstract captures the content quite well:
We analyze the memory in volatility by studying volatility return intervals, defined as the time between two consecutive fluctuations larger than a given threshold, in time periods following stock market crashes. Such an aftercrash period is characterized by the Omori law, which describes the decay in the rate of aftershocks of a given size with time t by a power law with exponent close to 1. A shock followed by such a power law decay in the rate is here called Omori process. We find self-similar features in the volatility. Specifically, within the aftercrash period there are smaller shocks that themselves constitute Omori processes on smaller scales, similar to the Omori process after the large crash. We call these smaller shocks subcrashes, which are followed by their own aftershocks. We also show that the Omori law holds not only after significant market crashes as shown by Lillo and Mantegna [Phys. Rev. E 68, 016119 2003], but also after “intermediate shocks.” ...
These are only a few of the power law type regularities now known to hold for most markets, with only very minor differences between markets. An important effort is to find ways to explain these regularities in simple and plausible market models. None of these patterns can be explained by anything in the standard economic theories of markets (the EMH etc). They can of course be reproduced by suitably generating time series using various methods, but that hardly counts as explanation -- that's just using time series generators to reproduce certain kinds of data.

The promise of finding these kinds of patterns is that they may strongly constrain the types of theories to be considered for markets, by ruling out all those which do not naturally give rise to this kind of statistical behaviour. This is where data matters most in science -- by proving that certain ideas, no matter how plausible they seem, don't work. This data has already stimulated the development of a number of different avenues for building market theories which can explain the basic statistics of markets, and in so doing go well beyond the achievements of traditional economics.

I'll have more to say on that in the near future.

Friday, December 2, 2011

Interview with Dave Cliff

Dave Cliff of the University of Bristol is someone whose work I've been meaning to look at much more closely for a long time. Essentially he's an artificial intelligence expert, but he has devoted some of his work to developing trading algorithms. He suggests that many of these algorithms, even ones working on extremely simple rules, consistently outperform human beings, which rather undermines the common economic view of people as highly sophisticated rational agents.

I just noticed that Moneyscience is beginning a several-part interview with Cliff, the first part of which has just appeared. I'm looking forward to the rest. Some highlights from Part I, beginning with Cliff's early work in the mid-1990s on writing algorithms for trading:
I wrote this piece of software called ZIP, Zero Intelligence Plus. The intention was for it to be as minimal as possible, so it is a ridiculously simple algorithm, almost embarrassingly so. It’s essentially some nested if-then rules, the kind of thing that you might type into an Excel spreadsheet macro. And this set of decisions determines whether the trader should increase or decrease a margin. For each unit it trades, has some notion of the price below which it shouldn’t sell or above which it shouldn’t buy and that is its limit price. However, the price that it actually quotes into the market as a bid or an offer is different from the limit price because obviously, if you’ve been told you can buy something and spend no more than ten quid, you want to start low and you might be bidding just one or two pounds. Then gradually, you’ll approach towards the ten quid point in order to get the deal, so with each quote you’re reducing the margin on the trade.  The key innovation I introduced in my ZIP algorithm was that it learned from its experience. So if it made a mistake, it would recognize that mistake and be better the next time it was in the same situation.

HFTR: When was this exactly?

DC: I did the research in 1996 and HP published the results, and the ZIP program code, in 1997. I then went on to do some other things, like DJ-ing and producing algorithmic dance music (but that’s another story!)

Fast-forward to 2001, when I started to get a bunch of calls because a team at IBM’s Research Labs in the US had just completed the first ever systematic experimental tests of human traders competing against automated, adaptive trading systems. Although IBM had developed their own algorithm called MGD, (Modified Gjerstad Dickhaut), it did the same kind of thing as my ZIP algorithm, using different methods. They had tested out both their MGD and my ZIP against human traders under rigorous experimental conditions and found that both algorithms consistently beat humans, regardless of whether the humans or robots were buyers or sellers. The robots always out-performed the humans.

IBM published their findings at the 2001 IJCAI conference (the International Joint Conference on AI) and although IBM are a pretty conservative company, in the opening paragraphs of this paper they said that this was a result that could have financial implications measured in billions of dollars. I think that implicitly what they were saying was there will always be financial markets and there will always be the institutions (i.e. hedge funds, pension management funds, banks, etc). But the traders that do the business on behalf of those institutions would cease to be human at some point in the future and start to be machines. 
Personally, I think there are two important things here. One is that, yes, trading will probably soon become almost all algorithmic. This may tend to make you think the markets will become more mechanical, their collective behaviour emerging out of the very simple actions of so many crude programs.

But the second thing is what this tells us about people -- that traders and investors and people in general aren't so clever or rational, and most of them have probably been following fairly simple rules all along, rules that machines can easily beat. So there's really no reason to think the markets should become more mechanical as they become more algorithmic. They've probably been quite mechanical all along, and algorithmic too -- it's just that non-rational zero intelligence automatons running the algorithms were called people.
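
For the curious, here is a heavily simplified sketch of the kind of rule Cliff describes above -- a caricature in the spirit of ZIP, not the published algorithm itself, with the learning rate and the update details as my own placeholders. The trader keeps a profit margin on top of its limit price and nudges its quote toward recently observed trade prices:

```python
import random

class ZipLikeTrader:
    """A caricature of a ZIP-style seller; not Cliff's published ZIP algorithm."""

    def __init__(self, limit_price, learning_rate=0.3):
        self.limit = limit_price        # never sell below this
        self.margin = 0.25              # fractional profit margin on top of the limit
        self.beta = learning_rate       # how quickly the margin is adjusted

    def quote(self):
        return self.limit * (1.0 + self.margin)

    def observe(self, last_trade_price):
        # Nudge the quoted price part of the way toward the last observed trade,
        # then store the result as a new margin (never going below the limit).
        new_quote = self.quote() + self.beta * (last_trade_price - self.quote())
        self.margin = max(new_quote / self.limit - 1.0, 0.0)

random.seed(6)
trader = ZipLikeTrader(limit_price=10.0)
for _ in range(50):
    market_price = 12.0 + random.uniform(-0.5, 0.5)   # noisy stream of trade prices
    trader.observe(market_price)
print(f"quote after 50 observations: {trader.quote():.2f}")
# The quote drifts from the initial 12.50 toward the going price of about 12,
# without the trader ever "knowing" anything beyond its limit and recent trades.
```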