Prof. Jayanth R. Varma's Financial Markets Blog

Thu, 29 Mar 2012

Pricing of liquidity

Prior to the crisis, liquidity risk was underpriced and even ignored. Now, the pendulum has swung to the other extreme, but the result may once again be that liquidity is mispriced.

The Financial Stability Institute set up by the Bank for International Settlements and the Basel Committee on Banking Supervision has published a paper “Liquidity transfer pricing: a guide to better practice” by Joel Grant of the Australian Prudential Regulation Authority. The paper argues that a matched maturity transfer pricing method based on the swap yield curve does not price liquidity at all:

These banks came to view funding liquidity as essentially free, and funding liquidity risk as essentially zero. ... If we assume that interest rate risk is properly accounted for using the swap curve, then a zero spread above the swap curve implies a zero charge for the cost of funding liquidity.

I find myself in total disagreement with this assertion. The standard liquidity preference theory of the term structure says that the long term interest rate is equal to the expected average short term interest rate plus a liquidity premium. So matched maturity transfer pricing does price liquidity. If one accepts the market liquidity premium as correct, one can go further and say that the swap based approach prices liquidity perfectly; but I do not wish to push the argument that far. I would only say that Grant’s argument holds true only under the pure expectations theory of the term structure, and in that case the entire market is, by definition, placing a zero price on liquidity.
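
To make the point concrete, here is the standard decomposition in symbols (the notation is mine, not the paper's):

```latex
% Liquidity preference view of the term structure (my notation): the T-period
% rate is the average expected short rate plus a term (liquidity) premium.
\[
  y_{0,T} \;=\; \frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}\!\left[r_t\right] \;+\; L_T ,
  \qquad L_T \ge 0,\ \ L_T \ \text{increasing in } T .
\]
% Matched maturity transfer pricing off the swap curve charges the user of
% T-period funds the full y_{0,T}, and therefore the premium L_T as well.
% Only under the pure expectations theory (L_T = 0 for all T) is that charge zero.
```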

The paper argues that matched maturity transfer pricing must be based on the bank’s borrowing yield curve – the bank’s fixed rate borrowing cost is converted into a floating rate cost (using an “internal swap”) and the spread of this floating rate borrowing cost over the swap yield curve is treated as a liquidity premium. I believe that the error in this prescription is that it conflates credit and liquidity risk. The spread above the swap curve reflects the term structure of the bank’s default risk. Grant seems to recognize this, but then he ignores the problem:
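
A stylized numerical illustration of the prescription may help; all the numbers below are invented for illustration and are not taken from the paper.

```python
# Stylized illustration of the "internal swap" prescription described above.
# All numbers are invented; they are not from the paper.

swap_rate_5y = 0.040          # 5-year swap rate (fixed vs floating)
bank_fixed_5y = 0.055         # bank's own 5-year fixed-rate borrowing cost

# Internal swap: receiving fixed at the swap rate against the bank's fixed
# borrowing cost converts it into a floating cost of (floating index + spread).
spread_over_swap = bank_fixed_5y - swap_rate_5y    # 150 basis points

# The paper treats this whole spread as the "cost of liquidity" to be charged
# to 5-year assets. The objection in the post: this spread mostly reflects the
# bank's own 5-year default risk, while the liquidity premium is already
# embedded in the swap curve itself.
print(f"spread charged as 'liquidity premium': {spread_over_swap:.2%}")
```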

This ... reflects both idiosyncratic credit risks and market access premiums and is considered to be a much better measure of the cost of liquidity.

I believe that there is a very big problem in including the bank’s default risk premium in pricing the assets that the bank is holding. The problem is that the bank’s default risk depends on the asset quality of the bank. Transfer pricing based on this yield curve can thus set up a vicious circle that turns a healthy bank into a toxic bank. A high transfer price of funds means that the bank is priced out of the market for low risk assets and the bank ends up with higher risk assets. The higher risk profile of the bank increases its borrowing cost and therefore its transfer price. This pushes the bank into even more risky assets, and the vicious circle continues until the bank fails or is bailed out.

This problem is well known even in corporate finance where a firm is engaged in many different lines of business. There the solution is to use a divisional cost of capital which ignores the risk of the company as a whole and focuses on the risk of the division in question. The use of a corporate cost of capital in diversified companies leads to the lower risk businesses being starved of funds while the high risk businesses are allowed to grow. Ultimately, the corporate cost of capital also rises. Divisional cost of capital solves this problem.
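
A toy example with invented numbers shows how a single corporate hurdle rate misallocates capital, which is exactly the mechanism at work in the bank transfer pricing case above.

```python
# Illustrative (invented) numbers for the corporate vs divisional cost of
# capital point: a diversified firm with a safe division and a risky division.

rf = 0.03                                          # risk-free rate
mrp = 0.06                                         # market risk premium
beta_safe, beta_risky = 0.5, 1.8
beta_firm = 0.5 * beta_safe + 0.5 * beta_risky     # firm is half safe, half risky

def capm(beta):
    return rf + beta * mrp

hurdle_firm = capm(beta_firm)                      # single corporate hurdle: 9.9%
safe_project_return, risky_project_return = 0.07, 0.12

# Against the corporate hurdle, the safe project (7% < 9.9%) is rejected and
# the risky project (12% > 9.9%) is accepted. Against the correct divisional
# hurdles, the safe project adds value (7% > 6%) and the risky one destroys
# it (12% < 13.8%).
print(f"corporate hurdle: {hurdle_firm:.1%}, "
      f"divisional hurdles: {capm(beta_safe):.1%} / {capm(beta_risky):.1%}")
```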

It would be very odd if a regulatory guide to best practice ignores all this learning and pushes banks in the wrong direction. We should not lose sight of the simple principle that assets must be priced based on the characteristics of the asset and not the characteristics of the owner of the asset.

Posted at 15:34 on Thu, 29 Mar 2012     View/Post Comments (2)     permanent link


Fri, 23 Mar 2012

Globalized Finance

It is interesting to find a well known G-SIFI (Global Systemically Important Financial Institution) being described as:

a London based hedge fund, headed by a Rajasthani, masquerading as a German bank

In all fairness, the description is perhaps partly facetious and in any case, I doubt whether this G-SIFI is either as globalized or as important as the Rothschilds (another Anglo-German combination) were in their heyday.

If you are keen to verify your guess of the identity of the G-SIFI in question, go to this dealbreaker.com story, scroll down to the comments, and read the comment of Edmond Dantes, from which the above quote is taken.

Posted at 15:07 on Fri, 23 Mar 2012     View/Post Comments (0)     permanent link


Thu, 15 Mar 2012

Reviving structural models: Pirrong tackles commodity price dynamics

The last quarter century has seen the slow death of structural models in finance and the relentless rise of reduced form models. I have argued that this leads to models that are “over-calibrated to markets and under-grounded in fundamentals”, and was therefore quite happy to see Craig Pirrong revive structural models with his recent book on Commodity Price Dynamics.

Ironically, it was a paper based on a structural model that made it possible to jettison structural models. The 1985 paper by Cox, Ingersoll and Ross (“An Intertemporal General Equilibrium Model of Asset Prices”, Econometrica, 1985, 53(2), 363-384) took a structural model of a very simple economy and showed that asset prices must equal discounted values of the asset payoff after making a risk adjustment in the drift term of the dynamics of the state variables. This was a huge advance because it became possible for modellers to simply assume a set of relevant state variables, calibrate the drift adjustments (risk premia) to other market prices, and value derivatives without any direct reference to fundamentals at all.
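
Schematically, the recipe that emerged looks something like this (my summary notation, not the paper's):

```latex
% Schematic statement of the reduced-form recipe (not CIR's own notation).
% Under the true dynamics a state variable x follows
%   dx = \mu(x)\,dt + \sigma(x)\,dW .
% Risk adjustment replaces the drift by \mu(x) - \lambda(x)\sigma(x), where
% \lambda is the market price of risk, and the asset price is then
\[
  P_t \;=\; \mathbb{E}^{\mathbb{Q}}_t\!\left[ e^{-\int_t^T r_s\,ds}\, X_T \right],
  \qquad dx = \bigl(\mu(x) - \lambda(x)\sigma(x)\bigr)\,dt + \sigma(x)\,dW^{\mathbb{Q}} .
\]
% Once \lambda has been calibrated to other market prices, the payoff X_T can
% be valued with no direct reference to the underlying fundamentals.
```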

Over time, reduced form models swept through the whole of finance. Structural (Merton) models of credit risk were replaced by reduced form models. Structural models of the yield curve (based on the mean reversion and other dynamics of the short rate) were replaced by the Libor Market Model (LMM). In commodity price modelling, fundamentals were swept aside, and replaced by an unobservable quantity called the convenience yield.

All this was useful and perhaps necessary because the reduced form models were eminently tractable and could be made to fit market prices quite closely. By contrast, structural models were either intractable or too oversimplified to fit market prices well enough. Yet, there is reason to worry that the use of reduced form models has gone beyond the point of diminishing returns. It is worth trying to reconnect the models to fundamentals.

This is what Pirrong is trying to do in the context of commodity prices. What he has done is to abandon the idea of closed form solutions and rely on computing power to solve the structural models numerically. I believe this is a very promising idea, though Pirrong’s approach stretches computing feasibility to its limits.

Pirrong models the spot commodity price as a function of one state variable (inventory, denoted x) and two fundamentals (denoted y and z, representing either demand shocks with different degrees of persistence or a supply shock and a demand shock). As long as inventory is non-zero, the spot price must equal the discounted forward price, where the forward price in turn satisfies a differential equation of the Black-Scholes type. The level of inventory is the result of an inter-temporal optimization problem.
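
Ignoring physical storage costs, the no-arbitrage condition just described can be written as follows (a simplification of Pirrong's setup, in my own notation):

```latex
% S is the spot price, F the one-period forward, x' the inventory carried
% into the next period, r the interest rate. Storage costs are ignored here.
\[
  S_t(x, y, z) \;=\; e^{-r\,\Delta t}\, F_t(x', y, z) \quad \text{whenever } x' > 0,
  \qquad
  S_t(x, y, z) \;\ge\; e^{-r\,\Delta t}\, F_t(x', y, z) \quad \text{at a stock-out } (x' = 0).
\]
```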

Pirrong solves all these problems numerically using a discrete grid of values for x, y and z. Moreover, to use numerical methods, time (t) must also be discretized – Pirrong uses a time interval of one day and the forward prices are for one-day maturity. After discretization, the optimization becomes a stochastic dynamic programming problem. For each day on the grid, a series of problems have to be solved to get the spot price and forward price functions. For each value of inventory in the x grid, a two dimensional partial differential equation has to be solved numerically to get the grid of forward prices associated with that level of inventory. Then for each point in the x-y-z grid, a fixed point (or root finding) problem has to be solved to determine the closing inventory at that date. Once opening and closing inventories are known, the spot price is determined by equating supply and demand. All this has to be repeated for each date: the dynamic programming problem has to be solved recursively starting from the terminal date.

In this process, the computation of forward prices assumes a spot price function, and the spot price function assumes a forward price function. The solution of the stochastic dynamic programming problem consists essentially of iterating this process until it converges (the new value of the spot price function is sufficiently close to the previous value).
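
To make the loop structure concrete, here is a heavily simplified sketch: one demand shock instead of two fundamentals, risk-neutral shock dynamics, zero storage costs, a linear inverse demand curve, and a finite-horizon backward recursion in place of Pirrong's infinite-horizon fixed-point iteration. All functional forms and parameters are invented; only the nesting of the loops is meant to mirror the description above.

```python
import numpy as np

# Heavily simplified sketch of the nested solution structure described above.
# Assumptions (mine, not Pirrong's): one demand shock y instead of two
# fundamentals, risk-neutral shock dynamics, zero storage cost, a linear
# inverse demand curve, and a finite horizon solved by backward recursion.

r, dt = 0.05, 1.0 / 365.0                    # interest rate, one-day step
disc = np.exp(-r * dt)
supply = 1.0                                  # fixed daily production
T = 200                                       # horizon in days

x_grid = np.linspace(0.0, 2.0, 21)            # inventory grid
y_grid = np.linspace(-1.0, 1.0, 11)           # demand-shock grid
rho, sigma = 0.98, 0.10                       # AR(1) persistence and volatility

# crude discretization of the AR(1) transition density for y
P = np.exp(-0.5 * ((y_grid[None, :] - rho * y_grid[:, None]) / sigma) ** 2)
P /= P.sum(axis=1, keepdims=True)

def spot_from_consumption(c, y):
    """Linear inverse demand: price implied by consuming c units given shock y."""
    return np.maximum(5.0 + y - 2.0 * c, 0.01)

# terminal date: everything available is consumed, nothing is carried forward
spot = spot_from_consumption(x_grid[:, None] + supply, y_grid[None, :])

for t in range(T - 1, -1, -1):
    # "forward price step": the one-day forward for each inventory level is the
    # conditional expectation of tomorrow's spot price under the y-transition
    forward = spot @ P.T                      # forward[i, j] = E[spot(x_i, .) | y_j]

    new_spot = np.empty_like(spot)
    for i, x in enumerate(x_grid):            # loop over opening inventory
        for j, y in enumerate(y_grid):        # loop over the demand shock
            # "fixed point step": raise the carry-out inventory until the
            # no-arbitrage condition spot >= disc * forward is restored
            k_star = 0
            for k, x_next in enumerate(x_grid):
                if x_next > x + supply - 1e-9:          # cannot store more than exists
                    break
                k_star = k
                s = spot_from_consumption(x + supply - x_next, y)
                if s >= disc * forward[k, j]:           # condition met: stop storing
                    break
            new_spot[i, j] = spot_from_consumption(x + supply - x_grid[k_star], y)
    spot = new_spot                           # recurse to the previous date

print(np.round(spot, 2))                      # spot price on the (x, y) grid at t = 0
```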

Pirrong reports that the solution of the stochastic dynamic programming problem takes six hours on a 1.2GHz computer. To calibrate the volatility, persistence and correlation of the fundamentals to observed data, it is necessary to run an extended Kalman Filter and the stochastic dynamic programming problem has to be solved for each value of these parameters. All in all, the computational process is close to the limits of what is possible without massive distributed computing. Pirrong reports that when he tried to add one more state variable, the computations did not converge despite running for 20 days on a fast desktop computer.

Though the numerical solution used only one-day forward prices, it is possible to obtain longer maturity (one-year and two-year) forward prices as well as option prices by solving the Black-Scholes type partial differential equation numerically. Pirrong shows that models of this type are able to explain several empirical phenomena.
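
In a discrete-grid version like the sketch above, for instance, a longer-dated forward is just a nested conditional expectation under the solved storage policy (again my notation, standing in for the PDE solution):

```latex
% The forward of maturity T is the risk-neutral conditional expectation of the
% date-T spot price, with inventory evolving under the optimal storage policy
% x_{s+1} = g_s(x_s, y_s):
\[
  F_{t,T}(x_t, y_t) \;=\; \mathbb{E}^{\mathbb{Q}}_t\!\left[\, S_T(x_T, y_T) \,\right],
  \qquad x_{s+1} = g_s(x_s, y_s),
\]
% which can be computed on the same grid by iterating the one-period
% expectation backwards from T, just as the one-day forward was.
```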

Perhaps it should be possible to use models of this kind elsewhere in finance. Term structure models are one obvious problem with similarities to the storage problem.

Posted at 14:41 on Thu, 15 Mar 2012     View/Post Comments (0)     permanent link


Wed, 14 Mar 2012

Glimcher: Foundations of Neuroeconomic Analysis

Over the last several weeks, I have been slowly assimilating Paul Glimcher’s Foundations of Neuroeconomic Analysis. Most of the neuroeconomics that I had read previously was written by economists (particularly behavioural economists) who have ventured into neuroscience. Glimcher is a neuroscientist who has ventured into psychology and economics. It appears to me that this makes a very profound difference.

First of all, neuroscientists (and biologists in general) treat the human brain (and more generally the animal brain) with an enormous amount of respect. The biologists’ view is that an organ that has evolved over hundreds of millions of years must be pretty close to perfection. For example, Glimcher points out that the ability of a rod cell in the human eye to detect a single photon of light “places human vision at the physical limits of light sensitivity imposed by quantum physics” (page 145). Similarly, the detection of image features in the visual cortex uses Gabor functions which also have well known optimality properties (page 237).

This view needs to be reconciled with the findings of psychologists and behavioural economists that the human brain makes the most egregious mistakes on very simple verbal problems. Glimcher provides one answer – evolution performs a constrained optimization in which greater accuracy has to be constantly balanced against greater computational costs (the brain consumes a disproportionate amount of energy despite its small size). Once again, this trade-off is carried out in a near perfect manner (pages 276-278). I would think that Gigerenzer’s Rationality for Mortals is another way of looking at this puzzle – many of these verbal problems are totally different from the problems that the brain has encountered during millions of years of evolution.

The second profound difference is that biologists do not put human behaviour on a totally different pedestal from animal behaviour. They tend to believe that the neural processes of a rhesus monkey are very similar to those of human beings. After all, the two species are separated by a mere 25 million years of evolution (page 169). Economists and psychologists probably have a much more anthropocentric view of the world. On this, I am with the biologists; in the whole of human history, anthropocentrism has at almost all times and in almost all contexts been a delusion.

This leads to a third big difference in neuroeconomics itself. Much of Glimcher’s book is based on studies of single neurons or multiple neurons and is therefore extremely precise and detailed. Highly intrusive single neuron studies are obviously much easier to do on animals than on human beings. Much of the neuroeconomics written by economists is therefore based on functional magnetic resonance imaging (functional MRI or fMRI) which provides only a very coarse grained picture of what is going on inside the brain but is easy to do on human beings. The problem is that if one reads only the fMRI based neuroeconomics, one gets the feeling that neuroscience is highly speculative and imprecise.

Glimcher’s book also leads to a view of economics in which economic constructs like utility and maximization are reified in the form of physical representations inside the brain. I am tempted to call this Platonic economics (drawing an analogy with Platonic realism in philosophy), but Glimcher refers to this as “because models” instead of “as if models” – individuals do not act as if they maximize expected utility; they actually compute expected utility and maximize it. There are neural processes that actually encode expected utility and there are neural processes that actually compute the argmax of a function.

One of the interesting aspects of this process of reification is the detailed discussion of the neural mechanisms behind the “reference point” of prospect theory. Glimcher argues that “all sensory encoding is reference dependent: nowhere in the nervous system are the objective values of consumable rewards encoded.” Glimcher raises the tantalising possibility that temporal difference learning models could allow the reference point to be unambiguously identified (page 321 et seq).

Another important observation is that directly experienced probabilities and verbally communicated probabilities are totally different things. When random events are directly experienced, there are neural mechanisms that compute the expected utility directly, without probabilities and utilities being separately available for subsequent processing. As predicted by learning theory, these probabilities reflect an underweighting of low-probability events (because of a high learning rate). Symbolically communicated probabilities are a different thing altogether: here we find the standard Kahneman-Tversky phenomenon of overweighting of low-probability events.

Expected subjective values constructed from highly symbolic information are an evolutionarily new event, although they are also hugely important features of our human economies, and it may be the novelty of this kind of expectation that is problematic. ... If [symbolically communicated probabilities] is a phenomenon that lies outside the range of human maximization behavior, then we may need to rethink key elements of the neoclassical program. (page 373)

This too is probably related to Gigerenzer’s finding that frequencies work much better than probabilities in symbolically communicated problems and that single event probabilities are handled very badly.
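
A toy calculation illustrates the description/experience gap. The weighting function below is the standard Tversky-Kahneman (1992) form; the sampling model of "experience" is a deliberately crude simplification of my own, not anything from Glimcher's book.

```python
import numpy as np

def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting for described gambles:
    overweights small probabilities."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

rng = np.random.default_rng(0)
p_rare = 0.05                 # true probability of the rare event

# Described probability: a stated 5% chance gets a decision weight of ~13%
print(f"decision weight from description: {tk_weight(p_rare):.3f}")

# Experienced probability: with only a handful of draws, the rare event is
# often never encountered at all, so on average it is effectively underweighted
samples = rng.random((100_000, 20)) < p_rare        # 20 experiences per learner
never_seen = (samples.sum(axis=1) == 0).mean()
print(f"share of learners who never experience the 5% event: {never_seen:.1%}")
```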

Posted at 20:09 on Wed, 14 Mar 2012     View/Post Comments (0)     permanent link


Thu, 08 Mar 2012

Decumulation phase of retirement savings

During the last couple of decades, pension reforms have focused much attention on the accumulation phase in which individuals build up their retirement savings. Well designed defined contribution schemes have incorporated insights from neo-classical finance and behavioural finance to create low cost well diversified savings vehicles with simple default options. Much less attention has been paid to the decumulation phase after retirement where the savings are drawn down.

A report last month from the National Association of Pension Funds (NAPF) and the Pensions Institute in the UK argues that investor ignorance, combined with a lack of transparency and undesirable industry practices, leads to large losses for investors. According to the report:

Each annual cohort of pensioners loses in total around £500m-£1bn in lifetime income. This could treble as schemes mature and auto-enrolment brings 5-8m more employees into the system.

This represents 5-10% of the annual amount consumers spend on annuities.

The report makes a number of excellent suggestions including creating a default option for annuitization. I would argue that a more radical approach would ultimately be needed.

In the accumulation phase, the key advance was the distinction between systematic/market risk and diversifiable/idiosyncratic risk. By restricting choice to well diversified portfolios, the investor’s decision is dramatically simplified – the only choice required is the desired exposure to market risk (proxied by the percentage allocation to equities).

The corresponding distinction in the decumulation phase is between aggregate mortality risk (what I like to call macro-mortality risk) and individual specific mortality risk (micro-mortality). Given a large pool of investors in any defined contribution scheme and some degree of compulsory annuitization, it can be assumed that micro-mortality risk is largely diversified away. Compulsory annuitization eliminates adverse selection to a great extent and large pools provide diversification.
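
A small simulation with invented numbers illustrates the point: pooling makes the idiosyncratic (micro) component of mortality risk almost vanish, while uncertainty about the cohort-wide death rate (macro-mortality) does not diversify away at all.

```python
import numpy as np

# Toy simulation with invented numbers: the pooled payout is almost certain
# when only individual (micro) mortality risk is present, but stays volatile
# when the cohort-wide death rate itself (macro-mortality) is uncertain.

rng = np.random.default_rng(0)
n_members, n_scenarios = 100_000, 2_000
base_q = 0.02                     # assumed one-year death probability in the pool

# micro-mortality only: deaths are binomial around a known aggregate rate
deaths_micro = rng.binomial(n_members, base_q, size=n_scenarios)
micro_cv = deaths_micro.std() / deaths_micro.mean()

# add macro-mortality: the cohort-wide rate itself is uncertain (roughly +/- 10%)
q = np.clip(base_q * rng.normal(1.0, 0.10, size=n_scenarios), 0.0, 1.0)
deaths_macro = rng.binomial(n_members, q, size=n_scenarios)
macro_cv = deaths_macro.std() / deaths_macro.mean()

print(f"relative volatility of pool deaths, micro risk only: {micro_cv:.2%}")
print(f"relative volatility once macro-mortality risk is added: {macro_cv:.2%}")
```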

What is left is therefore the risk of a change in population-wide life expectancy or macro-mortality risk. It is not at all self-evident that insurance companies are well equipped to manage this risk. Perhaps, capital markets can deal with this risk better by spreading the risk across large pools of investment capital. In fact, it would make sense for many individuals in the accumulation phase to bear life-expectancy risk (as it increases the period during which their savings can accumulate). At least since the days of the Damsels of Geneva more than two centuries ago, pools of investment capital have been quite willing to speculate on diversified mortality risk. Shiller’s proposal regarding macro futures is another way of implementing this idea.

If we separate out macro mortality risk, then the decumulation phase of retirement savings can be commoditized in exactly the same way that indexation allowed the commoditization of the accumulation phase.

Posted at 22:08 on Thu, 08 Mar 2012     View/Post Comments (0)     permanent link



