Prof. Jayanth R. Varma's Financial Markets Blog

A Blog on Financial Markets and Their Regulation

© Prof. Jayanth R. Varma
jrvarma@iimahd.ernet.in


Sat, 28 Mar 2015

Reflections on tenth anniversary

My blog reaches its tenth anniversary tomorrow: over ten years, I have published 572 blog posts at a frequency of approximately once a week.

My first genuine blog post (not counting a test post and a “coming soon” post) on March 29, 2005 was about a creditor of Argentina (NML Capital) trying to persuade a US federal judge (Thomas Griesa) to attach some bonds issued by Argentina. The idea that a debtor’s liabilities (rather than its assets) could be attached struck me as funny. Ten years on, NML and Argentina are still battling it out before Judge Griesa, but things have moved from the comic to the tragic (at least from the Argentine point of view).

The most fruitful period for my blog (as for many other blogs) was the global financial crisis and its aftermath. The blog posts and the many insightful comments that my readers posted on the blog were the principal vehicle through which I tried to understand the crisis and to formulate my own views about it. During the last year or so, things have become less exciting. The blogosphere has also become a lot more crowded than it was when I began. Many times, I find myself abandoning a potential blog post because so many others have already blogged about it.

When I look back at the best bloggers that I followed in the mid and late 2000s, some have quit blogging because they found that they no longer had enough interesting things to say; a few have sold out to commercial organizations that turned these blogs into clickbait; at least one blogger has died; some blogs have gradually declined in relevance and quality; and only a tiny fraction have remained worthwhile blogs to read.

The tenth anniversary is therefore less an occasion for celebration, and more a reminder of senescence and impending mortality for a blog. I am convinced that I must either reinvent my blog or quit blogging. April and May are the months during which I take a long vacation (both from my day job and from my blogging). That gives me enough time to think about it and decide.

If you have some thoughts and suggestions on what I should do with my blog, please use the comments page to let me know.

Posted at 15:50 on Sat, 28 Mar 2015     View/Post Comments (8)     permanent link


Sat, 28 Feb 2015

How does a bank say that its employees are a big security risk?

Very simple. Describe them as your greatest resource!

In my last blog post, I pointed out that the Carbanak/Anunak hack was mainly due to the recklessness of the banks’ own employees and system administrators. Now that they are aware of this, banks have to disclose this as another risk factor in their regulatory filings. Here is how one well known US bank made this disclosure in its Form 10-K (page 39) last week (h/t the ever diligent Footnoted.com):

We are regularly the target of attempted cyber attacks, including denial-of-service attacks, and must continuously monitor and develop our systems to protect our technology infrastructure and data from misappropriation or corruption.

...

Notwithstanding the proliferation of technology and technology-based risk and control systems, our businesses ultimately rely on human beings as our greatest resource, and from time-to-time, they make mistakes that are not always caught immediately by our technological processes or by our other procedures which are intended to prevent and detect such errors. These can include calculation errors, mistakes in addressing emails, errors in software development or implementation, or simple errors in judgment. We strive to eliminate such human errors through training, supervision, technology and by redundant processes and controls. Human errors, even if promptly discovered and remediated, can result in material losses and liabilities for the firm.

Posted at 14:42 on Sat, 28 Feb 2015     View/Post Comments (1)     permanent link


Sun, 22 Feb 2015

Carbanak/Anunak: Patient Bank Hacking

There was a spate of press reports a week back about a group of hackers (referred to as the Carbanak or Anunak group) who had stolen nearly a billion dollars from close to a hundred different banks and financial institutions from around the world. I got around to reading the technical reports about the hack only now: the Kaspersky report and blog post as well as the Group-IB/Fox-IT report of December 2014 and their recent update. A couple of blog posts by Brian Krebs also helped.

The two technical analyses differ on a few details: Kaspersky suggests that the hackers had a Chinese connection while Group-IB/Fox-IT suggests that they were Russian. Kaspersky also seems to have had access to some evidence discovered by law enforcement agencies (including files on the servers used by the hackers). Group-IB/Fox-IT talk only about Russian banks as the victims while Kaspersky reveals that some US based banks were also hacked. But by and large the two reports tell a similar story.

The hackers did not resort to the obvious ways of skimming money from a bank. To steal money from an ATM, they did not steal customers’ ATM cards or PINs. Nor did they tamper with the ATM itself. Instead they hacked into the personal computers of bank staff including system administrators and used these hacked machines to send instructions to the ATM using the banks’ ATM infrastructure management software. For example, an ATM uses Windows registry keys to determine which tray of cash contains 100 ruble notes and which contains 5000 ruble notes. The CASH_DISPENSER registry key, for instance, might have VALUE_1 set to 5000 and VALUE_4 set to 100. A system administrator can change these settings to tell the ATM that the cash has been loaded into different bins by setting VALUE_1 to 100 and VALUE_4 to 5000 and restarting Windows to let the new values take effect. The hackers did precisely that (using the system administrators’ hacked PCs) so that the ATM, which thinks it is dispensing 1000 rubles in the form of ten 100 ruble notes, would actually dispense 50,000 rubles (ten 5000 ruble notes).
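
To make the registry trick concrete, here is a small Python mock-up (the key and value names follow the reports; the dispensing logic is my own schematic, not actual ATM code):

    physical_trays = {"VALUE_1": 5000, "VALUE_4": 100}   # notes actually loaded
    registry       = {"VALUE_1": 5000, "VALUE_4": 100}   # what the ATM believes

    def dispense(amount, denomination):
        # Pick the tray that the registry says holds this denomination.
        tray = next(t for t, d in registry.items() if d == denomination)
        notes = amount // denomination
        return notes * physical_trays[tray]   # cash that physically comes out

    print(dispense(1000, 100))   # honest configuration: 1000 rubles

    # The hackers swap the registry values (and restart Windows).
    registry = {"VALUE_1": 100, "VALUE_4": 5000}
    print(dispense(1000, 100))   # the same request now pays out 50,000 rubles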

Similarly, an ATM has a debug functionality to allow a technician to test the functioning of the ATM. With the ATM vault door open, a technician could issue a command to the ATM to dispense a specified amount of cash. There is no hazard here because with the vault door open, the technician has access to all the cash anyway without issuing any command. With access to the system administrators’ machines, the hackers simply deleted the piece of code that checked whether the vault door was open. All that they needed to do was to have a mole stand in front of the ATM when they issued a command to the ATM to dispense a large amount of cash.
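
In schematic form, the deleted check would have looked something like this (again my own sketch, not the vendor’s code):

    def debug_dispense(atm, amount):
        if not atm.vault_door_open:   # <-- the guard clause the hackers deleted
            raise PermissionError("debug dispense allowed only during servicing")
        atm.dispense(amount)          # a mole waits in front of the ATM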

Of course, ATMs were not the only way to steal money. Online fund transfer systems could be used to transfer funds to accounts owned by the hackers. Since the hackers had compromised the administrators’ accounts, they had no difficulty getting the banks to transfer the money. The only problem was to prevent the money from being traced back to the hackers after the fraud was discovered. This was achieved by routing the money through several layers of legal entities before loading it onto hundreds of credit cards which had been prepared in advance.

It is a very effective way to steal money, but it requires a lot of patience. “The average time from the moment of penetration into the financial institutions internal network till successful theft is 42 days.” Using emails with malicious attachments to hack a bank employee’s computer, the hackers patiently worked their way laterally, infecting the machines of other employees, until they succeeded in compromising a system administrator’s machine. They then patiently collected data about the banks’ internal systems from the screenshots and videos that their malware sent from the administrators’ machines. Once they understood the internal systems well, they could use those systems to steal money.

The lesson for banks and financial institutions is that it is not enough to ensure that the core computer systems are defended in depth. The Snowden episode showed that the most advanced intelligence agencies in the world are vulnerable to subversion by their own administrators. The Carbanak/Anunak incident shows that well defended bank systems are vulnerable to the recklessness of their own employees and system administrators using unpatched Windows computers and carelessly clicking on malicious email attachments.

Posted at 10:54 on Sun, 22 Feb 2015     View/Post Comments (0)     permanent link


Thu, 19 Feb 2015

Loss aversion and negative interest rates

Loss aversion is a basic tenet of behavioural finance, particularly prospect theory. It says that people are averse to losses and become risk seeking when confronted with certain losses. There is a huge amount of experimental evidence in support of loss aversion, and Daniel Kahneman won the Nobel Prize in Economics mainly for his work in prospect theory.

What are the implications of prospect theory for an economy with pervasive negative interest rates? As I write, German bund yields are negative up to a maturity of five years. Swiss yields are negative out to eight years (until a few days back, they were negative even at the ten-year maturity). France, Denmark, Belgium and the Netherlands also have negative yields out to at least three years.

A negative interest rate represents a certain loss to the investor. If loss aversion is as pervasive in the real world as it is in the laboratory, then investors should be willing to accept an even more negative expected return in risky assets if these risky assets offer a good chance of avoiding the certain loss. For example, if the expected return on stocks is -1.5% with a volatility of 15%, then there is a 41% chance that the stock market return is positive over a five year horizon (assuming a normal distribution). If the interest rate is -0.5%, a person with sufficiently strong loss aversion would prefer the 59% chance of loss in the stock market to the 100% chance of loss in the bond market. Note that this is the case even though the expected return on stocks in this example is less than that on bonds. As loss averse investors flee from bonds to stocks, the expected return on stocks should fall and we should have a negative equity risk premium. If there are any neo-classical investors in the economy who do not conform to prospect theory, they would of course see this as a bubble in the equity market; but if laboratory evidence extends to the real world, there would not be many of them.
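
The 41% figure is easy to verify, assuming the five-year return is the sum of independent normal annual returns:

    # P(cumulative 5-year return > 0) when annual returns are
    # N(mu = -1.5%, sigma = 15%): the 5-year sum is N(5*mu, sigma*sqrt(5)).
    from scipy.stats import norm

    mu, sigma, horizon = -0.015, 0.15, 5
    mean_5y = mu * horizon            # -7.5%
    sd_5y = sigma * horizon ** 0.5    # about 33.5%
    p_positive = 1 - norm.cdf(0, loc=mean_5y, scale=sd_5y)
    print(f"P(5-year return > 0) = {p_positive:.1%}")   # about 41.2%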

The second consequence would be that we would see a flipping of the investor clientele in equity and bond markets. Before rates went negative, the bond market would have been dominated by the most loss averse investors. These highly loss averse investors should be the first to flee to the stock markets. At the same time, it should be the least loss averse investors who would be tempted by the higher expected return on bonds (-0.5%) than on stocks (-1.5%) and would move into bonds overcoming their (relatively low) loss aversion. During the regime of positive interest rates and positive equity risk premium, the investors with low loss aversion would all have been in the equity market, but they would now all switch to bonds. This is the flipping that we would observe: those who used to be in equities will now be in bonds, and those who used to be in bonds will now be in equities.

This predicted flipping is a testable hypothesis. Examination of the investor clienteles in equity and bond markets before and after a transition to negative interest rates will allow us to test whether prospect theory has observable macro consequences.

Posted at 17:07 on Thu, 19 Feb 2015     View/Post Comments (5)     permanent link


Wed, 04 Feb 2015

Bank deposits without those exotic swaptions

Yesterday, the Reserve Bank of India did retail depositors a favour: it announced that it would allow banks to offer “non-callable deposits”. Currently, retail deposits are callable (depositors have the facility of premature withdrawal).

Why can the facility of premature withdrawal be a bad thing for retail depositors? It would clearly be a good thing if the facility came free. But in a free market, it would be priced. The facility of premature withdrawal is an embedded American-style swaption, and a callable deposit is just a non-callable deposit bundled with that swaption, whether the depositor wants that bundle or not. You pay for the swaption whether you need it or not.

Most depositors would not exercise that swaption optimally for the simple reason that optimal exercise is a difficult optimization problem to solve. Fifteen years ago, Longstaff, Santa-Clara and Schwartz wrote a paper showing that Wall Street firms were losing billions of dollars because they were using oversimplified (single factor) models to exercise American-style swaptions (“Throwing away a billion dollars: The cost of suboptimal exercise strategies in the swaptions market.”, Journal of Financial Economics 62.1 (2001): 39-66.). Even those simplified (single factor) models would be far beyond the reach of most retail depositors. It is safe to assume that almost all retail depositors behave suboptimally in exercising their premature withdrawal option.
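
A toy two-period example (all numbers made up, and far simpler than the models in that paper) shows both why the withdrawal option is valuable and why exercising it well is not trivial:

    # A two-year deposit at 6% with penalty-free withdrawal after year 1,
    # when the one-year reinvestment rate will be either 8% or 4%
    # (equally likely). Withdraw only if reinvesting beats the deposit.
    deposit_rate, principal = 0.06, 100.0
    reinvest_rates = [0.08, 0.04]   # assumed year-1 scenarios, prob 0.5 each

    hold_value = principal * (1 + deposit_rate) ** 2   # never withdraw: 112.36

    optimal = sum(
        0.5 * principal * (1 + deposit_rate) * (1 + max(r, deposit_rate))
        for r in reinvest_rates
    )                                                  # optimal exercise: 113.42

    print(hold_value, optimal)
    # The roughly 1% gap is the swaption value. A depositor who never
    # exercises pays for it (through a lower deposit rate) but never collects.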

In a competitive market, the callable deposits would be priced using a behavioural exercise model and not an optimal exercise strategy. Still the problem remains. Some retail depositors would exercise their swaptions better than others. A significant fraction might just ignore the swaption unless they have a liquidity need to withdraw the deposits. These ignorant depositors would subsidize the smarter depositors who exercise it frequently (though still suboptimally). And it makes no sense at all for the regulator to force this bad product on all depositors.

Post global financial crisis, there is a push towards plain vanilla products. The non-callable deposit is a plain vanilla product. The current callable version is a toxic/exotic derivative.

Posted at 21:37 on Wed, 04 Feb 2015     View/Post Comments (1)     permanent link


Tue, 27 Jan 2015

The politics of SEC enforcement or is it data mining?

Last month, Jonas Heese published a paper on “Government Preferences and SEC Enforcement” which purports to show that the US Securities and Exchange Commission (SEC) refrains from taking enforcement action against companies for accounting restatements when such action could cause large job losses, particularly in an election year and particularly in politically important states.

At first sight, all the econometrics appear convincing.

But then, I realized that there is one very big problem with the paper – the definition of labour intensity:

I measure LABOR INTENSITY as the ratio of the firm’s total employees (Compustat item: EMP) scaled by current year’s total average assets. If labor represents a relatively large proportion of the factors of production, i.e., labor relative to capital, the firm employs relatively more employees and therefore, I argue, is less likely to be subject to SEC enforcement actions.

Seriously? I mean, does the author seriously believe that politicians would happily attack a $1 billion company with 10,000 employees (because it has a relatively low labour intensity of 10 employees per $1 million of assets), but would be scared of targeting a $10 million company with 1,000 employees (because it has a relatively high labour intensity of 100 employees per $1 million of assets)? Any politician with such a weird electoral calculus is unlikely to survive for long in politics. (But a paper based on this alleged electoral calculus might even get published!)

I now wonder whether the results are all due to data mining. Hundreds of researchers are trying many things: they are choosing different subsets of SEC enforcement actions (say accounting restatements), they are selecting different subsets of companies (say non financial companies) and then they are trying many different ratios (say employees to assets). Most of these studies go nowhere, but a tiny minority produce significant results and they are the ones that we get to read.
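
A small simulation illustrates the worry (the numbers are of course assumed):

    # 500 researchers each test a specification whose true effect is zero.
    # At the 5% level, roughly 25 of them "find" a significant result,
    # and those are the studies that get written up.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_researchers, n_obs = 500, 100
    significant = 0
    for _ in range(n_researchers):
        x = rng.normal(size=n_obs)   # a made-up regressor
        y = rng.normal(size=n_obs)   # pure noise: no true relationship
        if stats.linregress(x, y).pvalue < 0.05:
            significant += 1
    print(f"{significant} of {n_researchers} specifications look 'significant'")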

Posted at 14:39 on Tue, 27 Jan 2015     View/Post Comments (0)     permanent link


Thu, 22 Jan 2015

Why did the Swiss franc take half a million milliseconds to hit one euro?

Updated

In high frequency trading, nine minutes is an eternity: it is half a million milliseconds – enough time for five billion quotes to arrive in the hyperactive US equity options market at its peak rate. On a human time scale, nine minutes is enough time to watch two average online content videos.

So what puzzles me about the soaring Swiss franc last week (January 15) is not that it rose so much, nor that it massively overshot its fair level, but that the initial rise took so long. Here is the time line of how the franc moved:

It appears puzzling to me that no human trader was taking out every euro bid in sight at around 9:33 am or so. I find it hard to believe that somebody like a George Soros in his heyday would have taken more than a couple of minutes to conclude that the euro would drop well below 1.00. It would then make sense to simply hit every euro bid above 1.00 and then wait for the point of maximum panic to buy the euros back.

Is it that high frequency trading has displaced so many human traders that there are too few humans left who can trade boldly when the algorithms shut down? Or are we in a post crisis era of mediocrity in the world of finance?

Updated to correct 9:03 to 9:33, change eight billion to five billion and end the penultimate sentence with a question mark.

Posted at 07:26 on Thu, 22 Jan 2015     View/Post Comments (1)     permanent link


Tue, 13 Jan 2015

RBI is also concerned about two hour resumption time for payment systems

Two months back, I wrote a blog post on how the Basel Committee on Payments and Market Infrastructures was reckless in insisting on a two hour recovery time even from severe cyber attacks.

I think that extending the business continuity resumption time target to a cyber attack is reckless and irresponsible because it ignores Principle 16 which requires an FMI to “safeguard its participants’ assets and minimise the risk of loss on and delay in access to these assets.” In a cyber attack, the primary focus should be on protecting participants’ assets by mitigating the risk of data loss and fraudulent transfer of assets. In the case of a serious cyber attack, this principle would argue for a more cautious approach which would resume operations only after ensuring that the risk of loss of participants’ assets has been dealt with. ... The risk is that payment and settlement systems in their haste to comply with the Basel mandates would ignore security threats that have not been fully neutralized and expose their participants’ assets to unnecessary risk. ... This issue is all the more important for countries like India whose enemies and rivals include some powerful nation states with proven cyber capabilities.

I am glad that last month, the Reserve Bank of India (RBI) addressed this issue in its Financial Stability Report. Of course, as a regulator, the RBI uses far more polite words than a blogger like me, but it raises almost the same concerns (para 3.58):

One of the clauses under PFMIs requires that an FMI operator’s business continuity plans must ‘be designed to ensure that critical information technology (IT) systems can resume operations within two hours following disruptive events’ and that there can be ‘complete settlement’ of transactions ‘by the end of the day of the disruption, even in the case of extreme circumstances’. However, a rush to comply with this requirement may compromise the quality and completeness of the analysis of causes and far-reaching effects of any disruption. Restoring all the critical elements of the system may not be practically feasible in the event of a large-scale ‘cyber attack’ of a serious nature on a country’s financial and other types of information network infrastructures. This may also be in conflict with Principle 16 of PFMIs which requires an FMI to safeguard the assets of its participants and minimise the risk of loss, as in the event of a cyber attack priority may need to be given to avoid loss, theft or fraudulent transfer of data related to financial assets and transactions.

Posted at 13:53 on Tue, 13 Jan 2015     View/Post Comments (0)     permanent link


Sat, 03 Jan 2015

Heterogeneous investors and multi factor models

I read two papers last week that introduced heterogeneous investors into multi factor asset pricing models. The papers help produce a better understanding of momentum and value but they seem to raise as many questions as they answer. The easier paper is A Tug of War: Overnight Versus Intraday Expected Returns by Dong Lou, Christopher Polk, and Spyros Skouras. They show that:

100% of the abnormal returns on momentum strategies occur overnight; in stark contrast, the average intraday component of momentum profits is economically and statistically insignificant. ... In stark contrast, the profits on size and value ... occur entirely intraday; on average, the overnight components of the profits on these two strategies are economically and statistically insignificant.

The paper also presents some evidence that “is consistent with the notion that institutions tend to trade intraday while individuals are more likely to trade overnight.” In my view, their evidence is suggestive but by no means compelling. The authors also claim that individuals trade with momentum while institutions trade against it. If momentum is not a risk factor but a free lunch, then this would imply that individuals are smart investors.

The NBER working paper (Capital Share Risk and Shareholder Heterogeneity in U.S. Stock Pricing) by Martin Lettau, Sydney C. Ludvigson and Sai Ma presents a more complex story. They claim that rich investors (those in the highest deciles of the wealth distribution) invest disproportionately in value stocks, while those in lower wealth deciles invest more in momentum stocks. They then examine what happens to the two classes of investors when there is a shift in the share of income in the economy going to capital as opposed to labour. Richer investors derive most of their income from capital and an increase in the capital share benefits them. On the other hand, investors from lower deciles of wealth derive most of their income from labour and an increase in the capital share hurts them.

Finally, the authors show very strong empirical evidence that the value factor is positively correlated with the capital share while momentum is negatively correlated. This would produce a risk based explanation of both factors. Value stocks lose money when the capital share is moving against the rich investors who invest in value and therefore these stocks must earn a risk premium. Similarly, momentum stocks lose money when the capital share is moving against the poor investors who invest in momentum and therefore these stocks must also earn a risk premium.

The different portfolio choices of the rich and the poor are plausible but not backed by any firm data. The direction of causality may well run in the opposite direction: Warren Buffett became rich by buying value stocks; he did not invest in value because he was rich.

But the more serious problem with their story is that it implies that both rich and poor investors are irrational in opposite ways. If their story is correct, then the rich must invest in momentum stocks to hedge capital share risk. For the same reason, the poor should invest in value stocks. In an efficient market, investors should not earn a risk premium for stupid portfolio choices. (Even in a world of homogeneous investors, it is well known that a combination of value and momentum has a better risk-return profile than either by itself: see for example, Asness, C. S., Moskowitz, T. J. and Pedersen, L. H. (2013), Value and Momentum Everywhere. The Journal of Finance, 68: 929-985)
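
A back-of-envelope calculation with assumed numbers shows how strong that diversification benefit is:

    # Two factors with 5% expected excess return and 10% volatility each
    # (Sharpe 0.5), correlation -0.5; consider the 50/50 combination.
    mu, sigma, rho = 0.05, 0.10, -0.5

    combo_mu = mu                                         # mix of equal means
    combo_var = 2 * 0.25 * sigma**2 + 2 * 0.25 * rho * sigma**2
    combo_sigma = combo_var ** 0.5                        # volatility halves to 5%
    print(mu / sigma, combo_mu / combo_sigma)             # Sharpe 0.5 versus 1.0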

Posted at 17:16 on Sat, 03 Jan 2015     View/Post Comments (0)     permanent link


Sun, 21 Dec 2014

FCA Clifford Chance Report Part II: The Menace of Selective Briefing

Yesterday, I blogged about the Clifford Chance report on the UK FCA (Financial Conduct Authority) from the viewpoint of regulatory capture. Today, I turn to the issue of the selective pre-briefing provided by the FCA to journalists and industry bodies. Of course, the FCA is not alone in doing this: government agencies around the world indulge in this anachronistic practice.

In the pre internet era, government agencies had to rely on the mass media to disseminate their policies and decisions. It was therefore necessary for them to cultivate the mass media to ensure that their messages got the desired degree of coverage. One of the ways of doing this was to provide privileged access to select journalists in return for enhanced coverage.

This practice is now completely anachronistic. The internet has transformed the entire paradigm of mass communication. In the old days, we had a push channel in which the big media outlets pushed their content out to consumers. The internet is a pull channel in which consumers pull whatever content they want. For example, I subscribe to the RSS/Atom feeds of several regulators around the world. I also subscribe to the feeds of several blogs which comment on regulatory developments worldwide. My feed reader pulls all this content to my computer and mobile devices and provides me instant access to these messages without the intermediation of any big media gatekeepers.

In this context, the entire practice of pre-briefing is anachronistic. Worse, it is inimical to the modern democratic ideals of equal and fair access to all. The question then is why it survives at all. I am convinced that what might have had some legitimate function decades ago has now been corrupted into something more nefarious. Regulators now use privileged access to suborn the mass media and to get favourable coverage of their decisions. Journalists have to think twice before they write something critical about a regulator who may simply cut off their privileged access.

It is high time we put an end to this diabolical practice. What I would like to see is the following:

  1. A regulator could meet a journalist one-on-one, but the entire transcript of the interview must then be published on the regulator’s website and the interview must be embargoed until such publication.
  2. A regulator could hold press conferences or grant live interviews to the visual media, but such events must be web cast live on the regulator’s website and transcripts must be published soon after.
  3. The regulators should not differentiate between (a) journalists from the mainstream media and (b) representatives of alternate media (including bloggers).
  4. Regulator web sites and feeds must be more friendly to the general public. For example, the item description field in an RSS feed or the item content field in an Atom feed should contain enough information for a casual reader to decide whether it is worth reading in full. Regulatory announcements must provide enough background to enable the general public to understand them.

Any breach of (1) or (2) above should be regarded as a selective disclosure that attracts the same penalties as selective disclosure by an officer of a listed company.

What I also find very disturbing is the practice of the regulator holding briefing sessions with select groups of regulated entities or their associations or lobby groups. In my view, while the regulator does need to hold confidential discussions with regulated entities on a one-on-one basis, any meeting attended by more than one entity cannot by definition be about confidential supervisory concerns. The requirement of publication of transcripts or live web casts should apply in these cases as well. In the FCA case, it seems to be taken for granted by all (including the Clifford Chance report) that the FCA needs to have confidential discussions with the Association of British Insurers (ABI). I think this view is mistaken, particularly when it is not considered necessary to hold a similar discussion with the affected policy holders.

Posted at 17:06 on Sun, 21 Dec 2014     View/Post Comments (0)     permanent link


Sat, 20 Dec 2014

Regulatory capture is a bigger issue than botched communications

I just finished reading the 226-page report that the non-independent directors of the UK FCA (Financial Conduct Authority) commissioned from the law firm Clifford Chance on the FCA’s botched communications regarding its proposed review of how insurance companies treat customers trapped in legacy pension plans. The report published earlier this month deals with the selective disclosure of market-moving price-sensitive information by the FCA itself to one journalist, and the failure of the FCA to issue corrective statements in a timely manner after large price movements in the affected insurance companies on March 28, 2014.

I will have a separate blog post on this whole issue of selective disclosure to journalists and to industry lobby groups. But in this post, I want to write about what I think is the bigger issue in the whole episode: what appears to me to be a regulatory capture of the Board of the FCA and of HM Treasury. It appears to me that the commissioning of the Clifford Chance review serves to divert attention from this vital issue and allows the regulatory capture to pass unnoticed.

The rest of this blog post is based on reading between the lines in the Clifford Chance report and is thus largely speculative. The evidence of regulatory capture is quite stark, but most of the rest of the picture that I present could be totally wrong.

The sense that I get is that there were two schools of thought within the FCA. One group of people thought that the FCA needed to do something about the 30 million policy holders who were trapped in exploitative pension plans that they could not exit because of huge exit fees. Since the plans were contracted prior to 2000 (in some cases they dated back to the 1970s), they did not enjoy the consumer protections of the current regulatory regime. This group within the FCA wanted to use the regulator’s powers to prevent these policy holders from being treated unfairly. The simplest solution of course was to abolish the exit fees, and let these 30 million policy holders choose new policies.

The other group within the FCA wanted to conduct a cosmetic review so that the FCA would be seen to be doing something, but did not want to do anything that would really hurt the insurance companies who made tons of money off these bad policies. Much of the confusion and lack of coordination between different officials of the FCA brought out in the Clifford Chance report appears to me to be only a manifestation of the tension between these two views within the FCA. It was critical for the second group’s strategy to work that the cosmetic review receive wide publicity that would fool the public into thinking that something was being done. Hence the idea of doing a selective pre-briefing to a journalist known to be sympathetic to the plight of the poor policy holders. The telephonic briefing with this journalist was not recorded, and was probably ambiguous enough to maintain plausible deniability.

The journalist drew the reasonable inference that the first group in the FCA had won and that the FCA was serious about giving a fair deal to the legacy policy holders and reported accordingly. What was intended to fool only the general public ended up fooling the investors as well, and the stock prices of the affected insurance companies crashed after the news report came out. The big insurance companies were now scared that the review might be a serious affair after all and pulled out all the stops to protect their profits. They reached out to the highest levels of the FCA and HM Treasury and ensured that their voice was heard. Regulatory capture is evident in the way in which the FCA abandoned even the pretence of serious action, and became content with cosmetic measures. Before the end of the day, a corrective statement came out of the FCA which made all the right noises about fairness, but made it clear that exit fees would not be touched.

The journalist in question (Dan Hyde of the Telegraph) nailed this contradiction in an email quoted in the Clifford Chance report (para 16.8):

But might I suggest that by any standard an exit fee that prevents a customer from getting a fairer deal later in life is in itself an unfair term on a policy.

On March 28, 2014, the top brass of the FCA and HM Treasury could see the billions of pounds wiped out on the stock exchange from the market value of the insurance companies, and they could of course hear the complaints from the chairmen of those powerful insurance companies. There was no stock exchange showing the corresponding improvement in the net worth of millions of policy holders savouring the prospect of escape from unfair policies, and their voice was not being heard at all. Out of sight, out of mind.

Posted at 18:25 on Sat, 20 Dec 2014     View/Post Comments (0)     permanent link


Sat, 13 Dec 2014

Unwarranted complacency about regulated financial entities

Two days back, the Securities and Exchange Board of India (SEBI) issued a public Caution to Investors about entities that make false promises and assure high returns. This is quite sensible and also well intentioned. But the first paragraph of the press release is completely wrong in asking investors to focus on whether the investment is being offered by a regulated or by an unregulated entity:

It has come to the notice of Securities and Exchange Board of India (SEBI) that certain companies / entities unauthorisedly, without obtaining registration and illegally are collecting / mobilising money from the general investors by making false promises, assuring high return, etc. Investors are advised to be careful if the returns offered by the person/ entity is very much higher than the return offered by the regulated entities like banks, deposits accepted by Companies, registered NBFCs, mutual funds etc.

This is all wrong because the most important red flag is the very high return itself, and not the absence of registration and regulation. That is the key lesson from the Efficient Markets Hypothesis:

If something appears too good to be true, it is not true.

For the purposes of this proposition, it does not matter whether the entity is regulated. To take just one example, Bernard L. Madoff Investment Securities LLC was regulated by the US SEC as a broker-dealer and as an investment advisor. Fairfield Greenwich Advisors LLC (through whose Sentry Fund many investors invested in Madoff’s Ponzi scheme) was also an SEC regulated investment advisor.

Regulated entities are always very keen to advertise their regulated status as a sign of safety and soundness. (Most financial entities usually prefer light touch regulation to no regulation at all.) But regulators are usually at pains to avoid giving the impression that regulation amounts to a seal of approval. For example, every public issue prospectus in India contains the disclaimer:

The Equity Shares offered in the Issue have not been recommended or approved by the Securities and Exchange Board of India

In this week’s press release however, SEBI seems to have inadvertently lowered its guard, and has come dangerously close to implying that regulation is a seal of approval and respectability. Many investors would misinterpret the press release as saying that it is quite safe to put money in a bank deposit or in a mutual fund. No, that is not true at all: the bank could fail, and market risks could produce large losses in a mutual fund.

Posted at 15:44 on Sat, 13 Dec 2014     View/Post Comments (1)     permanent link


Mon, 08 Dec 2014

Why no two factor authentication for advance tax payments in India?

I made an advance tax payment online today and it struck me that the bank never asks for two factor authentication for advance tax payments. It seems scandalous to me that payments of several hundreds of thousands of rupees are allowed without two factor authentication at a time when the online taxi companies are not allowed to bypass two factor authentication for payments of a few hundred rupees.

I can think of a couple of arguments why advance tax is different, but none of them are convincing.

The point is that the rule of law demands that the same requirements apply to one and all. The “King can do no wrong” argument is inconsistent with the rule of law in a modern democracy. I believe that all payments above some threshold should require two factor authentication.

Posted at 16:41 on Mon, 08 Dec 2014     View/Post Comments (0)     permanent link


Thu, 04 Dec 2014

On fickle foreign direct investment and patient foreign portfolio capital

No, that is not a typo; I am asserting the opposite of the conventional wisdom that foreign portfolio investment is fickle while foreign direct investment is more reliable. The conventional wisdom was on display today in news reports about the parliament’s apparent willingness to allow foreign direct investment in the insurance sector, but not foreign portfolio investment.

The conventional wisdom is propagated by macroeconomists who look at the volatility of aggregate capital flows – it is abundantly clear that portfolio flows stop and reverse during crisis periods (“sudden stops”) while FDI flows are more stable. Things look very different at the enterprise level, but economists working in microeconomics and corporate finance who can see a different world often do not bother to discuss policy issues.

Let me therefore give an example from the Indian banking industry to illustrate what I mean. In the late 1990s, after the Asian Crisis, one of the largest banks in the world decided that Asia was a dangerous place to do banking, sold a significant part of its banking operations in India, and went home. That is what I mean by fickle FDI. At the same time, foreign portfolio investors were providing tons of patient capital to Indian private banks like HDFC, ICICI and Axis to grow their business in India. In the mid 1990s, many people thought that liberalization would allow foreign banks to thrive; in reality, they lost market share (partly due to the fickleness and short-termism of their parents), and it is the Indian banks funded by patient foreign portfolio capital that gained a large market share.

In 2007, as the Great Moderation was about to end, but markets were still booming, ICICI Bank tapped the markets to raise $5 billion of equity capital (mainly from foreign portfolio investors) in accordance with the old adage of raising equity when it is available and not when it is needed. The bank therefore entered the global financial crisis with a large buffer of capital originally intended to finance its growth a couple of years ahead. During the crisis, even this buffer was perceived to be inadequate and the bank needed to downsize the balance sheet to ensure its survival. But without that capital buffer raised in good times, its position would have been a lot worse; it might even have needed a government bailout.

Now imagine that instead of being funded by portfolio capital, ICICI had been owned by say Citi. Foreign parents do not like to fund their subsidiaries ahead of need; they prefer to drip feed the subsidiary with capital as and when needed. In fact, if the need is temporary, the parent usually provides a loan instead of equity so that it can be called back when it is no longer needed. So the Indian subsidiary would have entered the crisis without that large capital buffer. During the crisis, the ability of the embattled parent to provide a large capital injection into its Indian operations would have been highly questionable. Very likely, the Indian subsidiary would have ended up as a ward of the state.

Macro patterns hide these interesting micro realities. The conventional wisdom ignores the fact that enterprise level risk management works to counter the vagaries of the external funding environment. It ignores the standard insight from the markets versus hierarchies literature that funding which relies on a large number of alternate providers of capital is far more resilient than funding which relies on just one provider of capital. In short, it is time to overturn the conventional wisdom.

Posted at 16:48 on Thu, 04 Dec 2014     View/Post Comments (1)     permanent link


Thu, 27 Nov 2014

Should governments hedge oil price risk?

I had an extended email conversation last month with a respected economist (who wishes to remain anonymous) about whether governments of oil importing countries should hedge oil price risk. While there is a decent literature on oil price hedging by oil exporters (for example, this IMF Working Paper of 2001), there does not seem to be much on oil importers. So we ended up more or less debating this from first principles. The conversation helped clarify my thinking, and this blog post summarizes my current views on this issue.

I think that hedging oil price risk does not make much sense for the government of an oil importer for several reasons:

  1. Oil imports are usually not a very large fraction of GDP; by contrast oil exports are often a major chunk of GDP for a large exporter. For most countries, oil price risk is just one among many different macroeconomic shocks that can hit the country. Just as for a company, equity capital is the best hedge against general business risks, for a country, external reserves and fiscal capacity are the best hedges against general macroeconomic shocks.
  2. For a country, the really important strategic risk relating to oil is a supply disruption (embargo for example) and this can be hedged only with physical stocks (like the US strategic oil reserve).
  3. A country is an amorphous entity. Probably, it is the government that will do the hedge, and private players that would consume the oil. Who pays for the hedge and who benefits from it? Does the government want the private players to get the correct price signal? Does it want to subsidize the private sector? If it is the private players who are consuming oil, why don’t we let them hedge the risk themselves?
  4. Futures markets may not provide sufficient depth, flexibility and liquidity to absorb a large importer’s hedging needs. The total open interest in ICE Brent futures is roughly equal to India’s annual crude imports.

Frankly, I think it makes sense for the government to hedge oil price risk only if it is running an administered price regime. In this case, we can analyse its hedging like a corporate hedging program. The administered price regime makes the government short oil (it is contracted to sell oil to the private sector at the administered price), and then it makes sense to hedge the fiscal cost by buying oil futures to offset its short position.
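
A stylized example with made-up prices shows how such a hedge would lock in the fiscal cost:

    # The government sells oil domestically at an administered price of
    # $60 while importing at spot, and buys futures at $65. The hedged
    # fiscal cost per barrel is pinned at futures minus administered = $5,
    # whatever the spot price turns out to be.
    admin_price, futures_price = 60.0, 65.0
    for spot in (40.0, 65.0, 90.0, 115.0):
        fiscal_cost = spot - admin_price       # loss on the short oil position
        futures_gain = spot - futures_price    # gain on the long futures
        print(spot, fiscal_cost, fiscal_cost - futures_gain)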

But an administered price regime is not a good idea. Even if, for the moment, one accepts the dubious proposition that rapid industrialization requires strategic under pricing of key inputs (labour, capital or energy), we only get an argument for energy price subsidies not for energy price stabilization. The political pressure for short term price stabilization comes from the presence of a large number of vocal consumers (think single truck owners for example) who have large exposures to crude price risk but do not have access to hedging markets. If we accept that the elasticity of demand for crude is near zero in the short term (though it may be pretty high in the long term), then unhedged entities with large crude exposures will find it difficult to tide through the short term during which they cannot reduce demand. They can be expected to be very vocal about their difficulties. The solution is to make futures markets more accessible to small and mid size companies, unincorporated businesses and even self employed individuals who need such hedges. This is what India has done by opening up futures markets to all including individuals. Most individuals might not need these markets (financial savings are the best hedge against most risks for individuals who are not in business). But it is easier to open up the markets to all than to impose complex documentation requirements that restrict access. Easy hedging eliminates the political need for administered energy prices.

With free energy pricing in place, the most sensible hedge for governments is a huge stack of foreign exchange reserves and a large pool of oil under the ground in a strategic reserve.

Posted at 21:29 on Thu, 27 Nov 2014     View/Post Comments (5)     permanent link


Mon, 24 Nov 2014

Ethnic diversity and price bubbles

The socializing finance blog points to a PNAS paper showing that ethnic diversity drastically reduces the incidence of price bubbles in experimental markets. This is a conclusion that I am inclined to believe on theoretical grounds and the paper itself presents the theoretical arguments very persuasively. However, the experimental evidence leaves me unimpressed.

The biggest problem is that in both the locales (Southeast Asia and North America) in which they carried out the experiments:

In the homogeneous markets, all participants were drawn from the dominant ethnicity in the locale; in the diverse markets, at least one of the participants was an ethnic minority.

This means that the experimental design conflates the presence of ethnic diversity with that of ethnic minorities. This is all the more important because for the experiments, they recruited skilled participants, trained in business or finance. There could therefore be a significant self selection bias here in that ethnic minority members who chose to train in business or finance might have been those with exceptional talent or aptitude.

This fear is further aggravated by the result in Figure 2 showing that the Southeast Asian markets performed far better than the North American markets. In fact, the homogeneous Southeast Asian markets did better than the diverse North American markets! The diverse Southeast Asian market demonstrated near perfect pricing accuracy. This suggests that the ethnic fixed effects (particularly the gap between the dominant North American ethnic group and the minority Southeast Asian ethnic group) are very large. A proper experimental design would have had homogeneous markets made out of minority ethnic members as well so that the ethnic fixed effects could be estimated and removed.

Another reason that I am not persuaded by the experimental evidence is that the experimental design prevented participants from seeing each other or communicating directly while trading. As the authors state, “So, direct social influence was curtailed, but herding was possible.” With a major channel of diversity induced improvement blocked off by the design itself, one’s prior on the size of the diversity effect is lower than it would otherwise be.

Posted at 17:41 on Mon, 24 Nov 2014     View/Post Comments (0)     permanent link


Mon, 17 Nov 2014

CPMI fixation on two hour resumption time is reckless

The Basel Committee on Payments and Market Infrastructures (CPMI, previously known as CPSS) has issued a document about Cyber resilience in financial market infrastructures insisting that payment and settlement systems should be able to resume operations within 2 hours from a cyber attack and should be able to complete the settlement by end of day. The Committee is treating a cyber attack as a business continuity issue and is applying Principle 17 of its Principles for financial market infrastructures. Key Consideration 6 of Principle 17 requires that the business continuity plan “should be designed to ensure that critical information technology (IT) systems can resume operations within two hours following disruptive events” and that the plan “should be designed to enable the FMI to complete settlement by the end of the day of the disruption, even in the case of extreme circumstances”.

I think that extending the business continuity resumption time target to a cyber attack is reckless and irresponsible because it ignores Principle 16 which requires an FMI to “safeguard its participants’ assets and minimise the risk of loss on and delay in access to these assets.” In a cyber attack, the primary focus should be on protecting participants’ assets by mitigating the risk of data loss and fraudulent transfer of assets. In the case of a serious cyber attack, this principle would argue for a more cautious approach which would resume operations only after ensuring that the risk of loss of participants’ assets has been dealt with.

I believe that if there were to be a successful cyber attack against a well run payment and settlement system, the attack would most likely be carried out by a nation-state. Such an attack would therefore be backed by resources and expertise far exceeding what any payment and settlement system would possess. Neutralizing such a threat would require assistance from the national security agencies of its own nation. It is silly to assume that such a cyber war between two nation states would be resolved within two hours just because a Committee in Basel mandates so.

The risk is that payment and settlement systems in their haste to comply with the Basel mandates would ignore security threats that have not been fully neutralized and expose their participants’ assets to unnecessary risk. I think the CPMI is being reckless and irresponsible in encouraging such behaviour.

This issue is all the more important for countries like India whose enemies and rivals include some powerful nation states with proven cyber capabilities. I think that Indian regulators should tell their payment and settlement systems that Principle 16 prevails over Principle 17 in the case of any conflict between the two principles. With this clarification, the CPMI guidance on cyber attacks would be effectively defanged.

Posted at 18:00 on Mon, 17 Nov 2014     View/Post Comments (0)     permanent link


Sun, 09 Nov 2014

Liquidity support for central counterparties

The UK seems to be going in the opposite direction to the US in terms of providing liquidity support to clearing corporations or central counterparties (CCPs). In the US, the Dodd-Frank Act amendments made it extremely difficult for the central bank to provide liquidity assistance to any non-bank. On the other hand, the Bank of England on Wednesday extended its discount window not only to all CCPs but also to systemically important broker-dealers (h/t OTC Space). The Bank of England interprets its liquidity provision function very widely:

As the supplier of the economy’s most liquid asset, central bank money, the Bank is able to be a ‘back-stop’ provider of liquidity, and can therefore provide liquidity insurance to the financial system.

My own view has always been that CCPs should have access to the discount window but only to borrow against the best quality paper (typically, government bonds). If there is a large shortfall in the pay-in, a CCP has to mobilize liquidity in probably less than an hour (before pay-out) and the only entity able to provide large amounts of liquidity at such short notice is the central bank. But if a CCP does not have enough top quality collateral on hand, it should be allowed to fail. A quarter century ago, Ben Bernanke argued that it makes sense for the central bank to stand behind even a failing CCP (Ben S. Bernanke, “Clearing and Settlement during the Crash”, The Review of Financial Studies, Vol. 3, No. 1, pp. 133-151). But I would not go that far. Most jurisdictions today are designing resolution mechanisms to deal with failed CCPs, so this should work even in a crisis situation.

Posted at 11:57 on Sun, 09 Nov 2014     View/Post Comments (0)     permanent link


Sat, 01 Nov 2014

Economics of counterfeit notes

If you are trying to sell $200 million of nearly flawless counterfeit $20 currency notes, there is only one real buyer – the US government itself. That seems to be the moral of a story in GQ Magazine about Frank Bourassa.

The story is based largely on Bourassa’s version of events and is possibly distorted in many details. However, the story makes it pretty clear that the main challenge in counterfeiting is not in the manufacture, but in the distribution. Yes, there is a minimum scale in the production process – Bourassa claims that a high end printing press costing only $300,000 was able to achieve high quality fakes. The challenge that he faced was in buying the correct quality of paper. The story does not say why he did not think of vertically integrating by buying a mini paper mill, but I guess that is because it is difficult to operate a paper mill secretly, unlike the printing press which can be run in a garage without anybody knowing about it. Bourassa was able to proceed because some paper mill somewhere in the world was willing to sell him the paper that he needed.

The whole point of anti counterfeiting technology is to increase the fixed cost of producing a note without increasing the variable cost too much. So high quality counterfeiting is not viable unless it is done at scale. But the distribution of fake notes suffers from huge diseconomies of scale – while it is pretty easy to pass off a few fake notes (especially small denomination notes), Bourassa found that it was difficult to sell large numbers of notes at even a 70% discount to face value. He ended up selling his stockpile to the US government itself. The price was his own freedom.

To prevent counterfeiting, the government needs to ensure that at every possible scale of operations, the combined cost of production and distribution exceeds the face value of the note. At low scale, the high fixed production cost makes counterfeiting uneconomical, while at large scale, the high distribution cost is the counterfeiter’s undoing. That is why the only truly successful counterfeiters have been other sovereigns, who have two decisive advantages: first, for them the fixed costs are actually sunk costs; and second, they have access to distribution networks that ordinary counterfeiters cannot dream of.
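
A stylized cost model (all numbers invented, loosely inspired by the story) illustrates the squeeze from both ends:

    # A $300,000 press (fixed cost), $1 of variable cost per $20 note, and
    # an assumed distribution discount that deepens sharply with scale:
    # small lots pass near face value, bulk lots sell at a steep discount.
    fixed_cost, variable_cost, face = 300_000.0, 1.0, 20.0

    def sale_price(n):
        discount = 0.10 if n < 10_000 else 0.70 if n < 100_000 else 0.95
        return face * (1 - discount)

    for n in (1_000, 50_000, 1_000_000):
        profit = n * (sale_price(n) - variable_cost) - fixed_cost
        print(f"{n:>9,} notes: profit {profit:>10,.0f}")
    # Losses at every scale: the fixed cost dominates small runs, and the
    # distribution discount dominates large ones.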

Posted at 21:09 on Sat, 01 Nov 2014     View/Post Comments (3)     permanent link


Wed, 29 Oct 2014

Why is the IMF afraid of negative interest rates?

A few days back, the IMF made a change in its rule for setting interest rates on SDRs (Special Drawing Rights) and set a floor of 5 basis points (0.05%) on this rate. The usual zero lower bound on interest rates does not apply to the SDR as there are no SDR currency notes floating around. The SDR is only a unit of account and to some extent a book entry currency. There is no technical problem with setting the interest rate on the SDR to a substantially negative number like -20%.

In finance theory, there is no conceptual problem with a large negative interest rate. Though we often describe the interest rate (r) as a price, it is actually 1+r and not r itself that is a price: the price of one unit of money today, in terms of money a year later, is 1+r. Prices have to be non-negative, but this only requires that r cannot drop below -100%. With bearer currency in circulation, a zero lower bound (ZLB) comes about because savers have the choice of saving in the form of currency and earning a zero interest rate. Actually, the return on cash is slightly negative (probably close to -0.5%) because of storage (and insurance) costs. As such, the ZLB is not at zero, but somewhere between -0.25% and -0.50%.

It has long been understood that a book entry (or mere unit of account) currency like the SDR is not subject to the ZLB at all. Buiter for example proposed the use of a parallel electronic currency as a way around the ZLB.

In this context, it is unfortunate that the IMF has succumbed to the fetishism of positive interest rates. At the very least, it has surrendered its potential for thought leadership. At worst, the IMF has shown that it is run by creditor nations seeking to earn a positive return on their savings when the fundamentals do not justify such a return.

Posted at 21:08 on Wed, 29 Oct 2014     View/Post Comments (0)     permanent link


Thu, 23 Oct 2014

Who should interpolate Libor: submitter or administrator?

ICE Benchmark Administration (IBA), the new administrator of Libor, has published a position paper on the future evolution of Libor. The core of the paper is a shift to “a more transaction-based approach for determining LIBOR submissions” and a “more prescriptive calculation methodology”. In this post, I discuss the following IBA proposals regarding interpolation and extrapolation:

Interpolation and extrapolation techniques are currently used where appropriate by benchmark submitters according to formulas they have adopted individually.

We propose that inter/extrapolation should be used:

  1. When a benchmark submitter has no available transactions on which to base its submission for a particular tenor but it does have transaction-derived anchor points for other tenors of that currency, and
  2. If the submitter’s aggregate volume of eligible transactions is less than a minimum level specified by IBA.

To ensure consistency, IBA will issue interpolation formula guidelines

Para 5.7.8

In my view, it does not make sense for the submitter to perform interpolations in situations that are sufficiently standardized for the administrator to provide interpolation formulas. It is econometrically much more efficient for the administrator to perform the interpolation. For example, the administrator can compute a weighted average with lower weights on interpolated submissions – ideally, the weights would be a declining function of the width of the interpolation interval. Thus, where many non-interpolated submissions are available, the data from other tenors would be virtually ignored (because of the low weights). But where there are no non-interpolated submissions, the data from other tenors would drive the computed value. The administrator can also use non-linear (spline) interpolation across the full range of tenors. If submitters are allowed to interpolate, perverse outcomes are possible. For example, where the yield curve has a strong curvature but only a few submitters provide data at the correct tenor, their submissions will differ sharply from the incorrect (interpolated) submissions of the majority. The standard procedure of ignoring extreme submissions would then discard all the correct data and average all the incorrect submissions!
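A minimal sketch of what such an administrator-side computation might look like (my own illustration, not IBA’s proposed methodology; the weighting function and the numbers are assumptions):

    import numpy as np

    def robust_average(values, widths, scale=6.0):
        # width = 0 for a direct, transaction-based submission; for an
        # interpolated one, the gap (in months) between its anchor tenors
        w = 1.0 / (1.0 + np.asarray(widths) / scale)
        return np.average(values, weights=w)

    # three direct 6-month submissions, two interpolated from 3m/12m anchors
    submissions = [2.10, 2.12, 2.08, 2.30, 2.28]
    widths = [0, 0, 0, 9, 9]
    print(round(robust_average(submissions, widths), 3))  # direct data dominates

Where no direct submissions exist, the same formula lets the interpolated values drive the result – exactly the graceful degradation described above.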

Many people tend to forget that even the computation of an average is an econometric problem that can benefit from the full panoply of econometric techniques. For example, an econometrician might suggest interpolating across submission dates using a Kalman filter. Similarly, covered interest parity considerations would suggest that submissions for Libor in other currencies should be allowed to influence the estimation of Libor in each currency (simultaneous equation rather than single equation estimation). So long as the entire estimation process is defined in open source computer code, I do not see why Libor estimates should not be based on a complex econometric procedure – a Bayesian Vector Auto Regression (VAR) with GARCH errors, for example.

Posted at 12:08 on Thu, 23 Oct 2014     View/Post Comments (0)     permanent link


Mon, 20 Oct 2014

Online finance and SIM-card security risks

For quite some time now, I have been concerned that the SIM card in the mobile phone is becoming the most vulnerable single point of failure in online security. The threat model that I worry about is that somebody steals your mobile, transfers the SIM card to another phone, and quickly goes about resetting the passwords of your email accounts and other sites where you have provided your mobile number as the recovery option. Using these email accounts, the thief then proceeds to reset passwords on various other accounts. This threat cannot be blocked by having a strong PIN or pattern lock on the phone or by remotely wiping the device, because the thief is using your SIM and not your phone.

If the thief knows enough of your personal details (name, date of birth and other identifying information), then with a little bit of social engineering, he could do a lot of damage during the couple of hours that it would take to block the SIM card. Remember that during this period, he can send text messages and Whatsapp messages in your name to facilitate his social engineering. The security issues are made worse by the fact that telecom companies simply do not have the incentives or the expertise to perform the kind of authentication that financial entities would. There have been reports of smart thieves getting duplicate SIM cards issued on the basis of fake police reports and forged identity documents (see my blog post of three years ago).

Modern mobile phones are more secure than the SIM cards that we put inside them. They can be secured not only with PIN and pattern locks but also with fingerprint scanners and face recognition software. Moreover, they support encryption and remote wiping. It is true that SIM cards can be locked with a PIN that has to be entered whenever the phone is switched off and on or the SIM is put into a different mobile. But I am not sure how useful this is if telecom companies are not very careful while providing the PUK code that allows the PIN to be reset.

If we assume that the modern mobile phone can be made reasonably secure, then it should be possible to make SIM cards more secure without the inconvenience of entering a SIM card PIN. In the computer world, for example, it is pretty common (in fact, recommended) to do remote (SSH) logins using only authentication keys without any user-entered passwords. This works with a pair of encryption keys – the public key sits on the target machine and the private key on the source machine. A similar system should be possible with SIM cards, with the private key sitting on the mobile and backed up on other devices. Moving the SIM to another phone would not work unless the thief could also transfer the private key. Moreover, you would be required to use the backed-up private key to make a request for a SIM replacement. This would keep SIM security completely in your hands and not in the hands of a telecom company that has no incentive to protect your SIM.
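A minimal sketch of the challenge-response idea, using an Ed25519 key pair from the Python cryptography package (the flow and the names are my own illustration of the concept, not any actual SIM protocol):

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    phone_key = Ed25519PrivateKey.generate()   # private key stays on the phone
    sim_pubkey = phone_key.public_key()        # public key provisioned on the SIM

    challenge = os.urandom(32)                 # SIM issues a random challenge
    signature = phone_key.sign(challenge)      # phone signs with its private key

    try:
        sim_pubkey.verify(signature, challenge)    # raises if the key is wrong
        print("SIM unlocked: this phone holds the private key")
    except InvalidSignature:
        print("SIM refuses to work: wrong phone")

Moving the SIM to a new phone would fail the challenge because the new phone cannot produce a valid signature, while the legitimate owner could restore the backed-up private key to a replacement device.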

This system could be too complex for many users who use a phone only for voice and non-critical communications. It could therefore be an opt-in system for those who use online banking and other services a lot and require a higher degree of security. Financial services firms should also insist on this higher degree of security for high-value transactions.

I am convinced that encryption is our best friend: it protects us against thieves who are adept at social engineering, against greedy corporations who are too careless about our security, and against overreaching governments. The only thing that you are counting on is that hopefully P ≠ NP.

Posted at 16:04 on Mon, 20 Oct 2014     View/Post Comments (2)     permanent link


Mon, 13 Oct 2014

What are banks for?

Much has been written since the Global Financial Crisis about how the modern banking system has become less and less about financing productive investments and more and more about shuffling pieces of paper in speculative trading. Last month, Jordà, Schularick and Taylor wrote an NBER Working Paper, “The Great Mortgaging: Housing Finance, Crises, and Business Cycles”, describing an even more fundamental change in banking during the 20th century. They construct a database of bank credit in advanced economies from 1870 to 2011 and document “an explosion of mortgage lending to households in the last quarter of the 20th century”. They conclude that:

To a large extent the core business model of banks in advanced economies today resembles that of real estate funds: banks are borrowing (short) from the public and capital markets to invest (long) into assets linked to real estate.

Of course, it can be argued that mortgage lending is an economically useful activity to the extent that it allows people early in their careers to buy houses. But it is also possible that much of this lending only boosts house prices and does not improve the affordability of houses to any significant extent.

The more important question is why banks have become less important in lending to businesses. One possible answer is that, in this traditional function, they have been disintermediated by capital markets. On the mortgage side, however, banks are perhaps dominant only because, with their Too-Big-To-Fail (TBTF) subsidies, they can afford to take the tail risks that capital markets refuse to take.

I think the Jordà, Schularick and Taylor paper raises the fundamental question of whether advanced economies need banks at all. If regulators impose the kind of massive capital requirements that Admati and her coauthors have been advocating, and banks were forced to contract, capital markets might well step in to fill the void in the advanced economies. The situation might well be different in emerging economies.

Posted at 17:39 on Mon, 13 Oct 2014     View/Post Comments (1)     permanent link


Sun, 28 Sep 2014

Why does SEC think that 333,251 minus 5 times 66,421 is not 1,146?

The CME futures contracts on the S&P 500 index come in two flavours – the big or full-size (SP) contract is five times the E-Mini (ES) contract. For clearing purposes, SP and ES contracts are fungible in a five-to-one ratio. The daily settlement price of both contracts is obtained by taking a volume weighted average price of the two contracts taken together, weighted in the same ratio.

Yet, according to a recent SEC order against Latour Trading LLC and Nicolas Niquet, a broker-dealer is required to maintain net capital on the two contracts separately. In Para 28 of its order, the SEC says that in February 2010, Latour held 333,251 long ES contracts and 66,421 short SP contracts, and netted these out to a long position of 1,146 ES contracts requiring a net capital of $14,325. According to the SEC, these should not have been netted, and Latour should have held net capital of $8.32 million ($4.17 million for the ES and $4.15 million for the SP). This is surely absurd.
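The netting arithmetic in the title is trivial to verify (the contract counts are from the SEC order; the 5:1 ratio is the CME’s own fungibility ratio):

    es_long, sp_short = 333_251, 66_421
    net_position = es_long - 5 * sp_short   # each SP contract equals five ES
    print(net_position)                     # 1,146 ES contracts net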

It is not as if the SEC does not allow netting anywhere. It allows index products to be offset by qualified stock baskets (para 10). In other words, an approximate hedge (index versus an approximate basket) can be netted, but an exact hedge (ES versus SP) cannot.

PS: I am not defending Latour at all. The rest of the order makes clear that there was a great deal of incompetence and deliberate under-estimation of net capital going on. It is only on the ES/SP netting claim that I think the SEC regulations are unreasonable.

Posted at 21:43 on Sun, 28 Sep 2014     View/Post Comments (1)     permanent link


Mon, 22 Sep 2014

Outsourcing financial repression to China and insourcing it back

It is well known that financial repression more or less disappeared in advanced economies during the 1980s and 1990s, but has been making a comeback recently. Is it possible that financial repression did not actually disappear, but was simply outsourced to China? And the comeback that we are seeing after the Global Financial Crisis is simply a case of insourcing the repression back?

This thought occurred to me after reading an IMF Working Paper on “Sovereign Debt Composition in Advanced Economies: A Historical Perspective”. What this paper shows is that many of the nice things that happened to sovereign debt in advanced economies prior to the Global Financial Crisis were facilitated by the robust demand for this debt from foreign central banks. In fact, the authors refer to this period not as the Great Moderation, but as the Great Accumulation. Though they do not mention China specifically, it is clear that the Great Accumulation was driven to a great extent by China. It is also clear that much of the Chinese reserve accumulation was made possible by the enormous financial repression within that country.

This leads me to my hypothesis that just as the advanced economies outsourced their manufacturing to more efficient manufacturers in China, they outsourced their financial repression to the most efficient manufacturer of financial repression – China. Now that China is becoming a less efficient and less willing provider of financial repression, advanced economies are insourcing this job back to their own central banks.

In this view of things, we overestimated the global reduction of financial repression in the 1990s and are overestimating the rise in financial repression since the crisis.

Posted at 11:59 on Mon, 22 Sep 2014     View/Post Comments (0)     permanent link


Sat, 13 Sep 2014

Fama-French and Momentum Factors: Updated Data Library for Indian Market

A year ago, my colleagues, Prof. Sobhesh K. Agarwalla and Prof. Joshy Jacob, and I created a publicly available data library providing the Fama-French and momentum factor returns for the Indian equity market, and promised to keep updating the data on a regular basis. It has taken a while to deliver on that promise, but we have now updated the data library. More importantly, we believe that we have now set up a process to do this on a sustainable basis by working together with the Centre for Monitoring Indian Economy (CMIE), which was the source of the data anyway. CMIE has agreed to implement our algorithms on its servers and give us the data files every month. That ensures more comprehensive coverage of the data and faster updates.

Posted at 21:10 on Sat, 13 Sep 2014     View/Post Comments (1)     permanent link


Sun, 07 Sep 2014

A benchmark is to price what a credit rating agency is to quality

Andrew Verstein has an interesting paper on the Law and Economics of Benchmark Manipulation. One of the gems in that paper is the title of this blog post: “A benchmark is to price what a credit rating agency is to quality.” Verstein is saying that just as credit rating agencies became destructive when their ratings were hardwired into various legal requirements, benchmarks also become dangerous when they are hardwired into various legal documents.

Just as in the case of rating agencies, regulators have encouraged reliance on price benchmarks. Even in the equity world, where exchange trading eliminates the need for many kinds of benchmarks, the closing price is an important benchmark that derives its importance mainly from its regulatory use. Verstein points out that “Indeed, it is hard to find an example of stock price manipulation that does not target the closing (or opening) price.” So we have taken a liquid and transparent market and conjured an opaque and vulnerable benchmark out of it. Regulators surely take some of the blame for this unfortunate outcome.

Another of Verstein’s points is that governments use benchmarks even when they know that they are broken: “the United States Treasury used Libor to make TARP loans during the financial crisis, despite being on notice that Libor was a manipulated benchmark.” In this case, Libor was not only manipulated but had become completely dysfunctional – I remember that the popular definition of Libor at the time was that it was the rate at which banks do not lend to each other in London. That was well before Libor became Lie-bor. The US government could easily have taken a reference rate from the US Treasury or repo markets and then set a fat enough spread over that reference rate (say 1000 basis points) to cover the TED spread, the CDS spread, and a Bagehotian penal spread. By choosing not to do so, it lent legitimacy to what it knew very well was an illegitimate benchmark.

Posted at 19:57 on Sun, 07 Sep 2014     View/Post Comments (0)     permanent link


Tue, 02 Sep 2014

Regulatory overreach: SEBI definition of research analyst

Yesterday, the Securities and Exchange Board of India (SEBI) issued regulations requiring all Research Analysts to be registered with SEBI. The problem is that the regulations use a very expansive definition of research analyst. This reminds me of my note of dissent to the report of the Financial Sector Legislative Reforms Commission (FSLRC) on the definition of a financial service. I wrote in that dissent that:

Many activities carried out by accountants, lawyers, actuaries, academics and other professionals as part of their normal profession could attract the registration requirement because these activities could be construed as provision of a financial service ... All this creates scope for needless harassment of innocent people without providing any worthwhile benefits.

Much the same could be said about the definition of research analyst. Consider, for example, this blog post by Prof. Aswath Damodaran of the Stern School of Business at New York University on the valuation of Twitter during its IPO. It clearly meets the definition of a research report in Regulation 2(w):

any written or electronic communication that includes research analysis or research recommendation or an opinion concerning securities or public offer, providing a basis for investment decision

Regulation 2(w) has a long list of exclusions, but Damodaran’s post does not fall under any of them. Therefore, Damodaran would clearly be a research analyst under several prongs of Regulation 2(u):

a person who is primarily responsible for:

with respect to securities that are listed or to be listed in a stock exchange

Under Regulation 3(1), Prof. Damodaran would need a certificate of registration from SEBI if he were to write a similar blog post about an Indian company. Or, under Regulation 4, he would have to tie up with a research entity registered in India.

Regulations of this kind are a form of regulatory overreach that must be prevented by narrowly circumscribing the powers of regulators in the statute itself. To quote another sentence that I wrote in the FSLRC dissent note: “regulatory self restraint ... is often a scarce commodity”.

Posted at 19:09 on Tue, 02 Sep 2014     View/Post Comments (2)     permanent link


Sat, 30 Aug 2014

IPO as call option for insiders

A couple of weeks ago, Matt Levine at Bloomberg View described a curious incident of a company that was a public company for only six days before cancelling its public issue:

  1. On July 30, 2014, an Israeli company, Vascular Biogenics Ltd. (VBL) announced that it had priced its initial public offering (IPO) at $12 per share and that the shares would begin trading on Nasdaq the next day. The registration statement relating to these securities was filed with and was declared effective by the US Securities and Exchange Commission (SEC) on the same day.
  2. On August 8, VBL announced that it had cancelled its IPO.

What happened in between was that on July 31, the shares opened at $11.00 and sank further to close at $10.25 (a 15% discount to the IPO price) on a large volume of 1.5 million shares, as compared to the total issue size of 5.4 million shares excluding the Greenshoe option (source of price and volume data: Google Finance). This price drop was bad news for one of the large shareholders, who had agreed to purchase almost 45% of the shares in the IPO. This insider was unwilling or unable to pay for the shares that he had agreed to buy. Technically, the underwriters were now on the hook, and the default could have triggered a spate of lawsuits. Instead, the company cancelled the IPO and the underwriting agreement. Nasdaq instituted a trading halt, but the company appears to be still technically listed on Nasdaq.

Matt Levine does a fabulous job of dissecting the underwriting agreement to understand the legal issues involved. I am, however, more concerned about the relationship between the insider and the company. The VBL episode seems to suggest that if you are an insider in a company, a US IPO is a free call option: if the stock price goes up on listing, the insider pays the IPO price and buys the stock; if the price goes down, the insider refuses to pay and the company cancels the IPO.
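A toy sketch of the payoff of this implied option (the prices are illustrative, with the IPO priced at $12 as in the VBL case):

    ipo_price = 12.0
    for listing_price in (10.25, 12.0, 15.0):
        # the insider pays only if the stock lists above the IPO price;
        # otherwise the issue is cancelled and the insider walks away
        payoff_per_share = max(listing_price - ipo_price, 0.0)
        print(listing_price, payoff_per_share)

That is exactly the payoff of a call option struck at the IPO price – obtained for free.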

Posted at 18:50 on Sat, 30 Aug 2014     View/Post Comments (1)     permanent link


Sat, 23 Aug 2014

Mutual fund liquidity, valuation and gates

Last month, the US Securities and Exchange Commission (SEC) adopted rules allowing money market funds (MMFs) to restrict (or “gate”) redemptions when there is a liquidity problem. These proposals have been severely criticized on the ground that they could lead to pre-emptive runs as investors rush to the exit before the gates are imposed.

I think the criticism is valid though I was among those who recommended the imposition of gates in Indian mutual funds during the crisis of 2008. The difference is that I see gates as a solution not to a liquidity problem, but to a valuation problem. The purpose of the gate in my view is to protect remaining investors from the risk that redeeming investors exit the fund at a valuation greater than the true value of the assets. An even better solution to this valuation problem is the minimum balance at risk proposal that I blogged about two years ago.

Posted at 15:12 on Sat, 23 Aug 2014     View/Post Comments (0)     permanent link


Sat, 16 Aug 2014

Carry trades and the forward premium puzzle

Tarek Hassan and Rui Mano have an interesting NBER conference paper (h/t Menzie Chinn at Econbrowser) that comes pretty close to saying that there is really no forward premium puzzle at all. The paper itself tends to obscure this message with phrases like cross-currency, between-time-and-currency, and cross-time components of uncovered interest parity violations. So what follows is my take on their paper.

Uncovered interest parity says that, ignoring risk aversion, currencies with high interest rates should be expected to depreciate so as to neutralise the interest differential. If not, risk-neutral investors from the rest of the world would move all their money into the high-yielding currency and earn higher returns. Similarly, currencies with low interest rates should be expected to appreciate enough to compensate for the interest differential, so that risk-neutral investors do not stampede out of the currency.

Violations of uncovered interest parity therefore have a potentially simple explanation in terms of risk premia. The problem is that the empirical relationship between interest differentials and currency appreciation is in the opposite direction to that predicted by uncovered interest parity: in a pooled time-series cross-sectional regression, currencies with high interest rates appreciate instead of depreciating. A whole investment strategy called the carry trade has been built on this observation. A risk-based explanation of this phenomenon would seem to require implausible time-varying risk premia. For example, if we interpret the pooled result in terms of a single exchange rate (say dollar-euro), the risk premium would have to keep changing sign depending on whether the dollar interest rate was higher or lower than the euro interest rate.

This is where Hassan and Mano come in with a decomposition of the pooled regression result. They argue that in a pooled sample, the result could be driven by currency fixed effects. For example, over their sample period, the New Zealand interest rate was consistently higher than the Japanese rate, and an investor who was consistently short the yen and long the New Zealand dollar would have made money. The crucial point is that a risk-based explanation of this outcome would not require time-varying risk premia – over the whole sample, the risk premium would be in one direction. What Hassan and Mano do not say is that a large risk premium would be highly plausible in this context. Japan is a net creditor nation, and Japanese investors would require a higher expected return on the New Zealand dollar to take the currency risk of investing outside their country. At the same time, New Zealand is a net debtor country, and borrowers there would pay a higher interest rate to borrow in their own currency rather than take the currency risk of borrowing in Japanese yen. It would be left to hedge funds and other players with substantial risk appetite to try to arbitrage this interest differential and earn the large risk premium on offer. Since the aggregate capital of these investors is quite small, the return differential is not fully arbitraged away.
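A hedged illustration of the fixed-effects point on synthetic data (my own toy example, not the paper’s dataset or code): each currency obeys uncovered interest parity period by period, yet a permanent cross-currency premium drags the pooled regression slope far below the value of one that uncovered interest parity predicts:

    import numpy as np

    rng = np.random.default_rng(1)
    n_ccy, T = 10, 200
    premium = rng.normal(0, 0.02, n_ccy)   # permanent per-currency risk premium
    i_diff = premium + rng.normal(0, 0.002, (T, n_ccy))  # interest differentials
    # within each currency, depreciation tracks the time variation in the
    # differential one for one (UIP holds), but not the permanent premium
    dep = (i_diff - premium) + rng.normal(0, 0.05, (T, n_ccy))

    beta = np.polyfit(i_diff.ravel(), dep.ravel(), 1)[0]
    print(round(beta, 2))   # close to 0, not 1: the premium does all the work

In the pooled regression the high-rate currencies appear not to depreciate at all, and a naive reading sees a carry trade profit, even though all that is going on is a constant, one-directional risk premium of the New Zealand/Japan kind.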

Hassan and Mano show that, empirically, only the currency fixed effect is statistically significant. The time-varying component of the uncovered interest parity violation within a fixed currency pair is not statistically significant. Nor is there a statistically significant time fixed effect related to the time-varying interest differential between the US dollar and a basket of other currencies. To my mind, if there is no time-varying risk premium to be explained, the forward premium puzzle disappears.

The paper goes on to show that the carry trade as an investment strategy is primarily about currency fixed effects. Hassan and Mano consider “a version of the carry trade in which we never update our portfolio. We weight currencies once, based on our expectation of the currencies’ future mean level of interest rates, and never change the portfolio thereafter.” This “static carry trade” accounts for 70% of the profits of the dynamic carry trade that rebalances the portfolio each period to go long the highest yielding currencies at that time and short the lowest yielding ones. More importantly, in the carry trade portfolio, the higher yielding currencies do depreciate against the lower yielding currencies. It is just that the depreciation is less than the interest differential, and so the strategy makes money. So uncovered interest parity gets the sign right; only the magnitude of the effect is smaller because of the risk premium. There is a large literature showing that the carry trade loses money at times of global financial stress, when investors can least afford to lose money, and therefore a large risk premium is intuitively plausible.

Posted at 12:26 on Sat, 16 Aug 2014     View/Post Comments (0)     permanent link


Sat, 09 Aug 2014

Tax avoidance with derivatives

Last month, the Permanent Subcommittee on Investigations of the United States Senate published a Staff Report on how hedge funds were using basket options to reduce their tax liability. The hedge fund’s underlying trading strategy involved 100,000 to 150,000 trades per day, and many of those trading positions lasted only a few minutes. Yet, because of the use of basket options, the trading profits ended up being taxed at the long term capital gains rate of 15-20% instead of the short term capital gains rate of 35%. The hedge fund saved $6.8 billion in taxes during the period 2000-2013. Perhaps more importantly, the hedge fund was also able to circumvent leverage restrictions.

The problem is that derivatives blur a number of distinctions that are at the foundation of the tax law everywhere in the world. Alvin Warren described the problem in great detail more than two decades ago (“Financial contract innovation and income tax policy.” Harvard Law Review, 107 (1993): 460). More importantly, Warren’s paper also showed that none of the obvious solutions to the problem would work.

We have similar problems in India as well. Mutual funds that invest at least 65% in equities produce income that is practically tax exempt for the investor, while debt mutual funds involve a substantially higher tax incidence. A very popular product in India is the “Arbitrage Mutual Fund”, which invests at least 65% in equities but also hedges the equity risk using futures contracts. The result is “synthetic debt” that enjoys the favourable tax treatment of equities.
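A toy sketch of the synthetic debt payoff (the numbers are assumed: stock bought at 100, futures sold at 104):

    s0, f0 = 100.0, 104.0                     # buy stock, sell futures
    for s_T in (60.0, 100.0, 150.0):          # terminal stock price scenarios
        equity_leg = s_T - s0                 # long stock P&L
        futures_leg = f0 - s_T                # short futures P&L
        print(s_T, equity_leg + futures_leg)  # always f0 - s0 = 4.0

Whatever the stock does, the hedged position earns the futures basis of 4 – economically an interest rate – while remaining an “equity” fund for tax purposes.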

In some sense, this is nothing new. In the Middle Ages, usury laws in Europe prohibited interest bearing debt, but allowed equity and insurance contracts. The market response was the infamous “triple contract” (contractus trinus) which used equity and insurance to create synthetic debt.

What modern taxmen are trying to do therefore reminds me of Einstein’s definition of insanity: doing the same thing over and over again and expecting different results.

Posted at 16:03 on Sat, 09 Aug 2014     View/Post Comments (1)     permanent link


Mon, 14 Jul 2014

Betting Against Beta in the Indian Market

My colleagues, Prof. Sobhesh Kumar Agarwalla, Prof. Joshy Jacob and Mr. Ellapulli Vasudevan, and I have written a working paper on “Betting Against Beta in the Indian Market” (also available at SSRN).

Recent empirical evidence from different markets suggests that the security market line is flatter than posited by CAPM and a market neutral portfolio long in low-beta assets and short in high-beta assets earns positive returns. Frazzini and Pedersen (2014) conceptualize a Betting against Beta (BAB) factor that tracks such a portfolio. They find that the BAB factor earns significant returns using data from 20 international equity markets, treasury bond markets, credit markets, and futures markets. We find that a similar BAB factor earns significant positive returns in the Indian equity market. The returns on the BAB factor dominate the returns on the size, value and momentum factors. We also find that stocks with higher volatility earn relatively lower returns. These findings are consistent with the Frazzini and Pedersen model in which many investors do not have access to leverage and therefore overweight the high-beta assets to achieve their target return.

Like our earlier work on the Fama-French and momentum factor returns in India (see this blog post), this study also contributes to an understanding of the cross section of equity returns in India. Incidentally, the long promised update of the Fama-French and momentum factor returns is coming soon. We wanted to put the data update process on a more sound foundation and that has taken time. While the update has been delayed, we expect it to be more reliable as a result.

Posted at 13:30 on Mon, 14 Jul 2014     View/Post Comments (4)     permanent link


Thu, 19 Jun 2014

Making margin models less procyclical

Last month, the Bank of England (BOE) published a Financial Stability Paper entitled “An investigation into the procyclicality of risk-based initial margin models”. After the Global Financial Crisis, there has been growing concern that procyclical margin requirements (margins are higher in times of market stress and lower in calm markets) induce complacency in good times and panic in bad times. There is therefore a desire to reduce procyclicality, but this is difficult to do without sacrificing the risk sensitivity of the margin system.

The BOE paper uses historical and simulated data to compare various margin models on their risk sensitivity and their procyclicality. Though the authors do not state this as a conclusion, their comparison does show that the exponentially weighted moving average (EWMA) model with a floor (minimum margin) is one of the better performing models on both risk sensitivity and procyclicality. This is gratifying in that India uses a system of this kind.
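For concreteness, here is a minimal sketch of an EWMA margin model with a floor (my own stylized version with assumed parameters, not the BOE paper’s exact specification or the Indian one):

    import numpy as np

    def ewma_margin(returns, lam=0.97, k=2.33, floor=0.02):
        var = np.var(returns[:25])                # seed the variance estimate
        margins = []
        for r in returns:
            var = lam * var + (1 - lam) * r * r   # EWMA variance update
            margins.append(max(k * np.sqrt(var), floor))  # floored margin
        return np.array(margins)

    rng = np.random.default_rng(0)
    calm = rng.normal(0, 0.01, 250)      # calm regime: 1% daily volatility
    stress = rng.normal(0, 0.03, 50)     # stress regime: 3% daily volatility
    m = ewma_margin(np.concatenate([calm, stress]))
    print(m[0].round(4), m[-1].round(4)) # floor binds when calm, margin rises in stress

The floor caps how far margins can fall in calm markets (reducing the subsequent procyclical jump), while the EWMA update preserves risk sensitivity when volatility spikes.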

However, the study leaves me quite dissatisfied. First, procyclicality is measured in terms of elevated realized volatility; market stress, in my view, is better measured by implied volatility (for example, the VIX) and by measures of funding liquidity. Second, the four models that the paper compares are all standard pre-crisis models. Even when the authors use simulated data from a regime switching model, they do not consider a margin model based on regime switching. Nor do they consider models based on fat-tailed distributions. There are no models that adjust margins slowly to reduce liquidity stresses in the system. Finally, they do little to quantify the tradeoff between risk sensitivity and procyclicality: how much risk sensitivity do we have to give up to achieve a desired reduction in procyclicality?

Posted at 14:16 on Thu, 19 Jun 2014     View/Post Comments (0)     permanent link


Wed, 18 Jun 2014

How to borrow $10 million against forged shares

James Altucher narrates a fascinating story about how a guy claiming to be related to Middle Eastern royalty almost succeeded in borrowing $10 million from a fund manager against forged shares representing $25 million of restricted stock of a private internet company (h/t Bruce Schneier).

To me the red flag in the story was that the borrower agreed without a murmur to the outrageous terms that the fund manager asked for:

Assuming that the loan is, for all practical purposes, without recourse to any other assets of the borrower because of the uncertainties of local law, all this can be valued using call and put options on the stock. The upside clause is just 25% of an at-the-money call option on the stock. The default loss is just the value of a put with a strike of $10 million. To discount the interest payments, we need the risk neutral probability of default, which I conservatively estimate as the probability of exercise of the two-year put option (in fact, the interest is paid quarterly, and some interest payments would be received even if the loan ultimately defaults).

For simplicity, I assume the risk free rate to be zero, which is realistic for the first two years but probably undervalues the ten year call. To add to the conservatism, I assume that the volatility of the stock is 100% for the first two years (the life of the loan) and drops sharply to 30% for the remainder of the ten year life of the call option. Taking the square root of the weighted average variance gives a volatility of about 52% for the call option. Since it is an internet stock, one can safely assume that the dividends are zero.

Under these assumptions, the fund manager expects to lose $3 million (the put option value) out of the $10 million loan, but expects to make $3.7 million on the call, $1.4 million in interest and $0.6 million in upfront fees. That is a net gain of $2.7 million or 27%. If the short term volatility is reduced to 50%, the default loss drops to less than $0.5 million and the net gain rises to 52%. Even if the short term volatility is raised to 160% (without raising the long term volatility), the deal still breaks even.
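The put and call numbers can be checked with a plain Black-Scholes calculation (a sketch under the stated assumptions: stock worth $25 million, zero rate, zero dividends, with the blended ten-year volatility working out to about 52%):

    from math import erf, log, sqrt

    def N(x):                                  # standard normal CDF
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, sigma, T):               # Black-Scholes call with r = 0
        d1 = (log(S / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * N(d1) - K * N(d2)

    def bs_put(S, K, sigma, T):                # put via put-call parity at r = 0
        return bs_call(S, K, sigma, T) - S + K

    sigma10 = sqrt((2 * 1.00**2 + 8 * 0.30**2) / 10)   # blended vol ~ 0.52
    put = bs_put(25.0, 10.0, 1.00, 2.0)                # default loss ~ $3.0mn
    call = 0.25 * bs_call(25.0, 25.0, sigma10, 10.0)   # upside clause ~ $3.7mn
    print(round(sigma10, 2), round(put, 1), round(call, 1))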

If a deal looks too good to be true, it usually is. The fund manager should have got suspicious right there.

As an aside, forged shares were a big menace in India in the 1990s, but we have solved that problem by dematerialization. (It is standard while lending against shares in India to ask for the shares to be dematerialized before being pledged.) The Altucher story suggests that the US still has the forged share problem.

Posted at 14:56 on Wed, 18 Jun 2014     View/Post Comments (1)     permanent link


Sat, 14 Jun 2014

19th century UK gilts mispricing versus modern on-the-run bond pricing

Andrew Odlyzko has an interesting paper entitled “Economically irrational pricing of 19th century British government bonds” (available on SSRN), which demonstrates that the more liquid perpetual bonds (consols) issued by the UK government often traded at prices about 1% higher than those of less liquid bonds with almost identical cash flows. Given that interest rates in that era were around 3%, these perpetual bonds would have had a duration of well over 30 years. So the 1% pricing disparity corresponds to a yield differential of about 3 basis points. That is much less than the yield differential between long maturity on-the-run and off-the-run treasuries in the US in recent decades, let alone the differentials in the Indian gilt market.

In other words, contrary to what Odlyzko seems to imply, the 19th century UK gilt market would appear to have been more efficient than modern government bond markets! Odlyzko offers a solution to this puzzle: most UK consols in the 19th century were held by retail investors, and very little was held by financial institutions. As Odlyzko rightly points out, this would substantially depress the premium for liquidity. Odlyzko argues that the liquidity premium should have been zero because the stock of the liquid consols was more than adequate to meet any reasonable liquidity demands. I do not agree with this claim. The experience with quantitative easing since the global financial crisis tells us that the demand for safe and liquid assets can be almost insatiable. That might well have been true two centuries ago.
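The back-of-envelope duration arithmetic is easy to verify (at yield y, a perpetuity’s Macaulay duration is (1+y)/y and its modified duration is 1/y):

    y = 0.03
    macaulay = (1 + y) / y          # about 34.3 years
    modified = 1 / y                # about 33.3 years
    implied_dy = 0.01 / modified    # yield gap behind a 1% price disparity
    print(round(macaulay, 1), round(modified, 1), round(implied_dy * 1e4, 1), "bp")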

Posted at 22:03 on Sat, 14 Jun 2014     View/Post Comments (0)     permanent link


Thu, 24 Apr 2014

Waiting for a national stock market in India

Today was another reminder that India still does not have a national stock market. The Indian stock markets are closed because Mumbai goes to the polls today. The country as a whole goes to the polls on ten different days spread over more than a month. Either the stock market should be closed on all ten days or on none.

It is high time that the regulators required the exchanges to operate out of their disaster recovery locations when Mumbai has a holiday and most of the country is working. That would also be a wonderful way of testing whether all those business continuity plans work as nicely on the ground as they do on paper. But something tells me that this is unlikely to happen anytime soon.

Two decades ago, we abolished the physical trading floor in Mumbai. But the trading floor in Mumbai lives on in the minds of key decision makers, and it will take a long time to liberate ourselves from the oppression of this imaginary trading floor.

Posted at 18:13 on Thu, 24 Apr 2014     View/Post Comments (4)     permanent link


Tue, 15 Apr 2014

The human rights of insider traders

The European Court of Human Rights (ECHR) has an interesting judgement (h/t June Rhee) upholding the human rights of those found guilty of insider trading (the judgement itself is available only in French, but the press release is available in English).

Though the fines and penalties imposed by the Italian Companies and Stock Exchange Commission (Consob) were formally defined as administrative in nature under Italian law, the ECHR ruled that “the severity of the fines imposed on the applicants meant that they were criminal in nature.” As such, the ECHR found fault with the procedures followed by Consob. For example, the accused had not had an opportunity to question any individuals who could have been interviewed by Consob. Moreover, the functions of investigation and judgement were housed within the same institution, reporting to the same president. The only thing that helped Consob was that the accused could and did challenge the Consob ruling in the Italian courts.

The ECHR ruling that the Consob fines were a criminal penalty brought into play the important principle that a person cannot be tried for the same offence twice. Under Italian law (based on the EC Market Abuse Directive), a criminal prosecution had taken place in addition to the Consob fines. The ECHR ruled that this violated the human rights of the accused.

It is important to recognize that the ECHR is not objecting to the substance of the insider trading statutes and the need to penalize the alleged offences. The Court clearly states that the regulations are “intended to guarantee the integrity of the financial markets and to maintain public confidence in the security of transactions, which undeniably amounted to an aim that was in the public interest. ... Accordingly, the fines imposed on the applicants, while severe, did not appear disproportionate in view of the conduct with which they had been charged.” Rather, the Court’s concerns are about due process of law and the protection of the rights to fair trial.

I think the principles of human rights are broadly similar across the free world – US, Europe and India. The judgement therefore raises important issues that go far beyond Italy.

Posted at 11:03 on Tue, 15 Apr 2014     View/Post Comments (2)     permanent link


Sat, 12 Apr 2014

Heartbleed and the need for air-gapped backups in finance

Heartbleed is perhaps the most catastrophic computer security disaster ever (For those not technically inclined, this xkcd comic is perhaps the most readable explanation of the bug). Bruce Schneier says that “On the scale of 1 to 10, this is an 11.” Since the bug has been around for a few years and the exploit leaves no trace on the server, the assumption has to be that passwords and private keys have been stolen from every server that was ever vulnerable. If you have the private key, you can read everything that is being sent to or received from the server until the private key (SSL Certificate) is changed even if the vulnerability itself has been fixed.

Many popular email, social media and other sites are affected, and we need to change our passwords everywhere. Over the next few weeks, I intend to change every single password that I use on the web – more than a hundred of them.

Thankfully, only a few banking sites globally seem to be affected. When I check now, none of the Indian banking sites that I use regularly are being reported as vulnerable. However, the banks have not said anything officially, and I am not sure whether they were never vulnerable or whether they fixed the vulnerability over the last few days after the bug was revealed. Even the RBI has been silent on this: if all Indian banks were safe, they should say so publicly, and if some were affected and have since been fixed, they should say so too. Incidentally, many Indian banking sites do not seem to implement Perfect Forward Secrecy, and that is not good at all.

More importantly, I think it is only a matter of time before large financial institutions around the world suffer a catastrophic security breach. Even if the mathematics of cryptography is robust (P ≠ NP), all the mathematics is implemented in code that often goes through only flimsy code reviews. I think it is necessary to have offline repositories of critical financial data so that one disastrous hack does not destroy the entire financial system. For example, every large depository, bank, mutual fund and insurance company should create a monthly backup of its entire database in a secure air-gapped location: just connect a huge storage rack to the server (or perhaps to the disaster recovery backup server), dump everything (encrypted) onto the rack, disconnect and remove the rack, and store the air-gapped rack in a secure facility. A few thousand dollars, or even a few tens of thousands of dollars, a month is a price that each of these institutions should be willing to pay for partial protection against the tail risk of an irrecoverable security breach.
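A minimal sketch of the dump-and-encrypt step (my own illustration; the paths, and the choice of a tar archive with symmetric Fernet encryption from the Python cryptography package, are assumptions):

    import io
    import tarfile
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()            # keep this key offline as well

    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        tar.add("database_dump")           # hypothetical path to the monthly dump

    ciphertext = Fernet(key).encrypt(buf.getvalue())
    with open("/mnt/airgap_rack/backup.enc", "wb") as f:   # hypothetical mount
        f.write(ciphertext)                # then disconnect and remove the rack

The point is less the particular tools than the discipline: the encrypted dump ends up on storage that is physically disconnected from everything a remote attacker can reach.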

Posted at 19:04 on Sat, 12 Apr 2014     View/Post Comments (0)     permanent link


Sat, 05 Apr 2014

Campbell on 2013 Economics Nobel Prizes

While much has been written about the 2013 Economics Nobel Prizes, almost everybody has focused on the disagreements between Fama and Shiller, with Hansen mentioned (if at all) as an afterthought (Asness and Liew is a good example). By contrast, John Campbell has a paper (h/t Justin Fox) on the 2013 Nobels for the Scandinavian Journal of Economics in which Hansen appears as the chief protagonist, while Fama and Shiller play supporting roles. The very title of the paper (“Empirical Asset Pricing”) indicates the difference in emphasis – market efficiency and irrational exuberance play second fiddle to Hansen’s GMM methodology.

To finance people like me, this comes as a shock: Fama and Shiller are people in “our field”, while Hansen is an “outsider” (a mere economist, not even a financial economist). Yet on deeper reflection, it is hard to disagree with Campbell’s unstated but barely concealed assessment: while Fama and Shiller are storytellers par excellence, Hansen stands on a different pedestal when it comes to rigour and mathematical elegance.

And even if you have no interest in personalities, I would still strongly recommend Campbell’s paper – it is by far the best 30-page introduction to empirical asset pricing that I have seen.

Posted at 16:56 on Sat, 05 Apr 2014     View/Post Comments (2)     permanent link



