To a Bayesian, almost everything is informative and therefore relevant. This means that the Independence of Irrelevant Alternatives axiom is rarely applicable.
A good illustration is provided by the Joint Staff Report on “The U.S. Treasury Market on October 15, 2014”. On that day, in the narrow window between 9:33 and 9:45 a.m. ET, the benchmark 10-year US Treasury yield dropped 16 basis points and then rebounded to its previous level. The impact of apparently irrelevant alternatives is described in the Staff Report as follows:
Around 9:39 ET, the sudden visibility of certain sell limit orders in the futures market seemed to have coincided with the reversal in prices. Recall that only 10 levels of order prices above and below the best bid and ask price are visible to futures market participants. Around 9:39 ET, with prices still moving higher, a number of previously posted large sell orders suddenly became visible in the order book above the current 30-year futures price (as well as in smaller size in 10-year futures). The sudden visibility of these sell orders significantly shifted the visible order imbalance in that contract, and it coincided with the beginning of the reversal of its price (the top of the price spike). Most of these limit orders were not executed, as the price did not rise to their levels.
In other words, traders (and trading algorithms) saw some sell orders which were apparently irrelevant (nobody bought from these sellers at those prices), but this irrelevant alternative caused the traders to change their choice between two other alternatives. Consider a purely illustrative example: just before 9:39 am, traders faced the choice between buying a modest quantity at a price of say 130.05 and selling a modest quantity at a price of 129.95. They were choosing to buy at 130.05. At 9:39, they find that there is a new alternative: they can buy a larger quantity at a price of say 130.25. They do not choose this new alternative, but they change their earlier choice from buying at 130.05 to selling at 129.95. This is the behaviour that is ruled out by the axiom of the Independence of Irrelevant Alternatives.
But if one thinks about the matter carefully, there is nothing irrational about this behaviour at all. At 8:30 am, the market had seen the release of somewhat weaker-than-expected US retail sales data. Many traders interpreted this as a memo that the US economy was weak and needed low interest rates for a longer period. Since low interest rates imply higher bond prices, traders started buying bonds. At 9:39, they see large sell orders for the first time. They realize that many large investors did not receive this memo, or maybe received a different memo. They think that their interpretation of the retail sales data might have been wrong and that they had possibly overreacted. They reverse the buying that they had done in the last few minutes.
In fact, the behaviour of the US Treasury markets on October 15 appears to me to be an instance of reasonably rational behaviour. Much of the action in those critical minutes was driven by algorithms which appear to have behaved rationally. With no adrenalin and testosterone flowing through their silicon brains, they could evaluate the new information in a rational Bayesian manner and quickly reverse course. The Staff Report says that human market makers stopped making markets, but the algorithms continued to provide liquidity and maintained an orderly market.
I expected the Staff Report to recommend that in the futures markets, the entire order book (and not just the best 10 levels) should be visible to all participants at all times. Given current computing power and communication bandwidth, there is no justification for sticking to this anachronistic practice of providing only limited information to the market. Surprisingly, the US authorities do not make this sensible recommendation because they fail to see the highly rational market response to newly visible orders. Perhaps their minds have been so conditioned by the Independence of Irrelevant Alternatives axiom that they are blind to any other interpretation of the data. Axioms of rationality are very powerful even when they are wrong.
Sun, 12 Jul 2015
In response to my blog post of a few days back on regulating crowd funding, my colleague Prof. Joshy Jacob writes in the comments:
I agree broadly with all the arguments in the blog post. I would like to add the following.
If tapping the crowd wisdom on the product potential is the essence of crowdfunding, substituting that substantially with equity crowdfunding may not be a very good idea. While the donation based crowdfunding generates a sense of the product potential by way of the backings, the equity crowdfunding by financiers would not give the same, as their judgments still need to be based on the crowd wisdom. Is it possible to create a sequential structure involving donation based crowdfunding and equity based crowdfunding?
Unlike most other forms of financing, the judgement in crowdfunding is often done sitting far away, without meeting the founders, devoid of financial numbers, and therefore almost entirely based on the campaign material posted. This intimately links the central role of the campaign success to the nature of the promotional material and endorsements by influential individuals. Evolving a role model for the multimedia campaigns would be appropriate, given the ample evidences on behavioral biases in retail investor decision making.
Both these are valid points that the regulator should take into account. However, I would worry a bit about people gaming the system. For example, if the regulator says that a successful donation crowdfunding campaign is a prerequisite for equity crowdfunding, there is a risk that entrepreneurs will get their friends and relatives to back the project in a donation campaign. It is true that angels and venture capitalists rely on crowdfunding campaign success as a metric of project viability, but I presume that they would have a slightly greater ability to detect such gaming than the crowd.
Mon, 06 Jul 2015
Many jurisdictions are struggling with the problem of regulating crowd funding. In India also, the Securities and Exchange Board of India issued a consultation paper on the subject a year ago.
I believe that there are two key differences between crowd funding and other forms of capital raising that call for quite novel regulatory approaches.
Crowd funding is for the crowd and not for the Wall Street establishment. There is a danger that if the regulators listen too much to the Wall Street establishment, they will produce something like a second tier stock market with somewhat diluted versions of a normal public issue. The purpose of crowd funding is different – it is to tap the wisdom of crowds. Crowd funding should attract people who have a passion for (and possibly expertise in) the product. Any attempt to attract those with expertise in finance instead of the product market would make a mockery of crowd funding.
The biggest danger that the crowd funding investor faces is not exploitation by the promoter today, but exploitation by the Series A venture capitalist tomorrow. Most genuine entrepreneurs believe in doing well for their crowd fund backers. After all, they share the same passion. Everything changes when the venture capitalist steps in. We have plenty of experience with venture capitalists squeezing out even relatively sophisticated angel investors. The typical crowd funding investor is a sitting duck by comparison.
What do these two differences imply for the regulator?
A focus on accredited investors would be a big mistake when it comes to crowd funding. These accredited investors will look for all the paraphernalia that they are accustomed to in ordinary equity issues – prospectuses, financial data and the like.
The target investor in a technology related crowd funding in India might in fact be a young software professional in Bangalore who is an absolute nerd when it comes to the product, but has difficulty distinguishing an equity share from a convertible preference share.
Disclosure should be focused on the people and the product. Financial data is meaningless and irrelevant. As in donation crowd funding, the disclosure will not be textual in nature, but will use rich multimedia to communicate soft information more effectively.
Equity crowd funding should be more like donation crowd funding with equity securities being one of the rewards. This implies that the vast majority of investors should be investing tiny amounts of money – the sort of money that one may spend on a dinner at a good restaurant. It should be money that one can afford to lose. In fact, it should be money that one expects to lose. Close to half of all startups probably fail and one should expect similar failure rates here as well.
If such small amounts of money are involved, transaction costs have to be very low. No regulatory scheme is acceptable if it will not work for small investments of say USD 50-100 in developed markets and much lower in emerging markets (say INR 500-1000 in India).
Some regulatory mechanisms need to be created for protecting the crowd in future negotiations with angels, venture capitalists and strategic buyers. Apart from some basic anti-dilution rights, we need some intermediary (similar to the debenture trustee in debt issues) who can act on behalf of all investors to prevent them from being short-changed in these negotiations. Going even further, perhaps even something similar to appraisal rights could be considered.
Regulatory staff who work on crowd funding regulations should be required to spend several hours watching donation crowd funding campaigns on platforms like Kickstarter and Indiegogo to develop a better appreciation of the activity that they are trying to regulate.
In the spirit of crowd sourcing, I would like to hear in the comments on what a good equity crowd funding market should look like and how it should be regulated. Interesting comments may be hoisted out of the comments into a subsequent blog post.
Thu, 02 Jul 2015
The following posts appeared on the sister blog (on Computing) last month.
Wed, 01 Jul 2015
After careful thought, I now think that it is a bad idea to mandate that regulated entities should store and retain records of all digital communications by their employees. Juicy emails and instant messages have been the most interesting element in many prosecutions, including those relating to the Libor scandal and to foreign exchange rigging. Surely, one would think, it is a good thing to force companies to retain these records for the convenience of prosecutors.
The problem is that today we use things like instant messaging where we would earlier have had an oral conversation. And there was no requirement to record these oral conversations (unless they took place inside specified locations like the trading room). The power of digital communications is that they transcend geographical boundaries. The great benefit of these technologies is that an employee sitting in India is able (in a virtual sense) to take part in a conversation happening around a coffee machine in the New York or London office.
Electronic communications can potentially be a great leveller that equalizes opportunities for employees in the centre and in the periphery. In the past, many jobs had to be in London or New York so that the employees could be tuned in to the office gossip and absorb the soft information that did not flow through formal channels. If we allowed a virtual chat room that spans the whole world, then the jobs too could be spread around the world. This potential is destroyed by the requirement that conversations in virtual chat rooms should be stored and archived while conversations in physical chat rooms can remain ephemeral and unrecorded. Real gossip will remain in the physical chat rooms and the jobs will also remain within earshot of these rooms.
India as a member of the G20 now has a voice in global regulatory organizations like IOSCO and BIS. Perhaps it should raise its voice in these fora to provide regulatory space for ephemeral digital communications that securely destroy themselves periodically.
Wed, 24 Jun 2015
Canayaz, Martinez and Ozsoylev have a nice paper showing that the pernicious effect of the revolving door (at least in the US) is largely about government employees favouring their future private sector employers. It is not so much about government employees favouring their past private sector employers or about former government employees influencing their former colleagues in the government to favour their current private sector employers.
Their methodology relies largely on measuring the stock market performance of the private sector companies whose employees have gone through the revolving door (in either direction) and comparing these returns with a control group of companies which have not used the revolving door. The abnormal returns are computed using the Fama-French-Carhart four factor model.
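The four-factor machinery can be made concrete with a short sketch. The code below (entirely my own illustration, using simulated data rather than the paper's sample) regresses a stock's excess returns on four simulated factor series; the intercept of that regression is the daily abnormal return (alpha) in the Fama-French-Carhart sense. It uses only the standard library, solving the normal equations directly.

```python
import random

random.seed(0)
n, k = 252, 4                      # one year of daily data, four factors

# Simulated daily factor returns: market, SMB, HML and momentum (UMD)
factors = [[random.gauss(0, 0.01) for _ in range(k)] for _ in range(n)]
alpha_true = 0.0005                # true daily abnormal return we hope to recover
betas_true = [1.1, 0.3, -0.2, 0.1]
returns = [alpha_true + sum(b * f for b, f in zip(betas_true, row))
           + random.gauss(0, 0.002) for row in factors]

# OLS via the normal equations; the intercept is the estimated alpha
# (the daily abnormal return under the four-factor model).
X = [[1.0] + row for row in factors]
p = k + 1
A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
y = [sum(X[i][a] * returns[i] for i in range(n)) for a in range(p)]

# Solve A @ coef = y by Gaussian elimination with partial pivoting
for c in range(p):
    piv = max(range(c, p), key=lambda r: abs(A[r][c]))
    A[c], A[piv] = A[piv], A[c]
    y[c], y[piv] = y[piv], y[c]
    for r in range(c + 1, p):
        m = A[r][c] / A[c][c]
        for cc in range(c, p):
            A[r][cc] -= m * A[c][cc]
        y[r] -= m * y[c]
coef = [0.0] * p
for r in range(p - 1, -1, -1):
    coef[r] = (y[r] - sum(A[r][cc] * coef[cc] for cc in range(r + 1, p))) / A[r][r]

alpha_hat = coef[0]                # estimated abnormal return per day
```

In the paper's setting the same regression is run on revolver firms and on the control group, and it is the difference in the estimated alphas that measures the revolving-door effect.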
The advantage of the methodology is that it avoids subjective judgements about whether for example, US Treasury Secretary Hank Paulson favoured his former employer, Goldman Sachs, during the financial crisis of 2008. It also avoids having to identify the specific favours that were done. The sample size also appears to be reasonably large – they have 23 years of data (1990-2012) and an average of 62 revolvers worked in publicly traded firms each year.
The negative findings in the paper are especially interesting, and if true could make it easy to police the revolving door. All that is required is a rule that when a (former) government employee joins the private sector, a special audit would be carried out of all decisions by the government employee during the past couple of years that might have provided favours to the prospective private sector employer. In particular, the resistance in India to hiring private sector professionals to important government positions (because they might favour their former employer) would appear to be misplaced.
One weakness in the methodology is that companies which anticipate financial distress in the immediate future might hire former government employees to help them lobby for some form of bail out. This might ensure that though their stock price declines due to the distress, it does not decline as much as it would otherwise have done. The excess return methodology would not however show any gain from hiring the revolver because the Fama French excess returns would be negative rather than positive. Similarly, companies which anticipate financial distress might take steps (for example, campaign contributions) that make it more likely that their employees are recruited into key government positions. Again, the excess return methodology would not pick up the resulting benefit.
Just in case you are wondering what all this has to do with a finance blog, the paper says that “[t]he financial industry, ... is a substantial employer of revolvers, giving jobs to twice as many revolvers as any other industry.” (Incidentally, Table A1 in their paper shows that including or excluding financial industry in the sample makes no difference to their key findings). And of course, the methodology is pure finance, and shows how much information can be gleaned from a rigorous examination of asset prices.
Thu, 18 Jun 2015
On Monday, the Basel Committee on Banking Supervision published its Regulatory Consistency Assessment Programme (RCAP) Assessment of India’s implementation of Basel III risk-based capital regulations. While the RCAP Assessment Team assessed India as compliant with the minimum Basel capital standards, they had a problem with the Indian use of the word “may” where the rest of the world uses “must”:
The team identified an overarching issue regarding the use of the word “may” in India’s regulatory documents for implementing binding minimum requirements. The team considers linguistic clarity of overarching importance, and would recommend the Indian authorities to use the word “must” in line with international practice. More generally, authorities should seek to ensure that local regulatory documents can be unambiguously understood even in an international context, in particular where these apply to internationally active banks. The issue has been listed for further reflection by the Basel Committee. As implementation of Basel standards progresses, increased attention to linguistic clarity seems imperative for a consistent and harmonised transposition of Basel standards across the member jurisdiction.
Section 2.7 lists over a dozen instances of such usage of the word “may”. For example:
Basel III paragraph 149 states that banks “must” ensure that their CCCB requirements are calculated and publicly disclosed with at least the same frequency as their minimum capital requirements. The RBI guidelines state that CCCB requirements “may” be disclosed at table DF-11 of Annex 18 as indicated in the Basel III Master Circular.
Ultimately, the RCAP Assessment Team adopted a pragmatic approach of reporting this issue as an observation rather than a finding. They were no doubt swayed by the fact that:
Senior representatives of several Indian banks unequivocally confirmed to the team during the on-site discussions that there is no doubt that the intended meaning of “may” in Indian banking regulations is “shall” or “must” (except where qualified by the phrase “may, at the discretion of” or similar terms).
The Indian response to the RCAP Assessment argues that “may” is perfectly appropriate in the Indian context.
RBI strongly believes that communication, including regulatory communications, in order to be effective, must necessarily follow the linguistics and social characteristics of the language used in the region (Indian English in this case), which is rooted in the traditions and customs of the jurisdiction concerned. What therefore matters is how the regulatory communications have been understood and interpreted by the regulated entities. Specific to India, the use of word “may” in regulations is understood contextually and construed as binding where there is no qualifying text to convey optionality. We are happy that the Assessment Team has appreciated this point.
I tend to look at this whole linguistic analysis in terms of the suits versus geeks divide. It is true that in Indian banking, most of the suits would agree that when RBI says “may” it means “must”. But increasingly in modern finance, the suits do not matter as much as the geeks. In fact, humans matter less than the computers and the algorithms that they execute. I like to joke that in modern finance the humans get to decide the interesting things like when to have a tea break, while the computers decide the important things like when to buy and sell.
For any geek worth her salt, the bible on the subject of “may” and “must” is RFC 2119 which states that “must” means that the item is an absolute requirement; “should” means that there may exist valid reasons in particular circumstances to ignore a particular item; “may” means that an item is truly optional. I will let Arnold Kling have the last word: “Suits with low geek quotients are dangerous”.
Wed, 17 Jun 2015
My long vacation provided the ideal opportunity to reflect on the large number of comments that I received on my last blog post about the tenth anniversary of my blog. These comments convinced me that I should not only keep my blog going but also try to engage more effectively with my readers. Over the next few weeks and months, I intend to implement many of the excellent suggestions that you have given me.
First of all, I have set up a Facebook page for this blog. This post and all future blog posts will appear on that page so that readers can follow the blog from there as well. My blog posts have been on twitter for over six years now and this will continue.
Second, I have started a new blog on computing with its own Facebook page which will over a period of time be backed up by a GitHub presence. I did not want to dilute the focus of this blog on financial markets and therefore decided that a separate blog was the best route to take. At the end of every month, I intend to post on each blog a list of posts on the sister blog, but otherwise this blog will not be contaminated by my meanderings in fields removed from financial markets.
Third, I will be experimenting with different kinds of posts that I have not done so far. This will be a slow process of learning and you might not observe any difference for many months.
Sat, 28 Mar 2015
My blog reaches its tenth anniversary tomorrow: over ten years, I have published 572 blog posts at a frequency of approximately once a week.
My first genuine blog post (not counting a test post and a “coming soon” post) on March 29, 2005 was about an Argentine creditor (NML Capital) trying to persuade a US federal judge (Thomas Griesa) to attach some bonds issued by Argentina. The idea that a debtor’s liabilities (rather than its assets) could be attached struck me as funny. Ten years on, NML and Argentina are still battling it out before Judge Griesa, but things have moved from the comic to the tragic (at least from the Argentine point of view).
The most fruitful period for my blog (as for many other blogs) was the global financial crisis and its aftermath. The blog posts and the many insightful comments that my readers posted on the blog were the principal vehicle through which I tried to understand the crisis and to formulate my own views about it. During the last year or so, things have become less exciting. The blogosphere has also become a lot more crowded than it was when I began. Many times, I find myself abandoning a potential blog post because so many others have already blogged about it.
When I look back at the best bloggers that I followed in the mid and late 2000s, some have quit blogging because they found that they no longer had enough interesting things to say; a few have sold out to commercial organizations that turned these blogs into clickbait; at least one blogger has died; some blogs have gradually declined in relevance and quality; and only a tiny fraction have remained worthwhile blogs to read.
The tenth anniversary is therefore less an occasion for celebration, and more a reminder of senescence and impending mortality for a blog. I am convinced that I must either reinvent my blog or quit blogging. April and May are the months during which I take a long vacation (both from my day job and from my blogging). That gives me enough time to think about it and decide.
If you have some thoughts and suggestions on what I should do with my blog, please use the comments page to let me know.
Sat, 28 Feb 2015
Very simple. Describe them as your greatest resource!
In my last blog post, I pointed out that the Carbanak/Anunak hack was mainly due to the recklessness of the banks’ own employees and system administrators. Now that they are aware of this, banks have to disclose this as another risk factor in their regulatory filings. Here is how one well known US bank made this disclosure in their Form 10K (page 39) last week (h/t the ever diligent Footnoted.com):
We are regularly the target of attempted cyber attacks, including denial-of-service attacks, and must continuously monitor and develop our systems to protect our technology infrastructure and data from misappropriation or corruption.
Notwithstanding the proliferation of technology and technology-based risk and control systems, our businesses ultimately rely on human beings as our greatest resource, and from time-to-time, they make mistakes that are not always caught immediately by our technological processes or by our other procedures which are intended to prevent and detect such errors. These can include calculation errors, mistakes in addressing emails, errors in software development or implementation, or simple errors in judgment. We strive to eliminate such human errors through training, supervision, technology and by redundant processes and controls. Human errors, even if promptly discovered and remediated, can result in material losses and liabilities for the firm.
Sun, 22 Feb 2015
There was a spate of press reports a week back about a group of hackers (referred to as the Carbanak or Anunak group) who had stolen nearly a billion dollars from close to a hundred different banks and financial institutions from around the world. I got around to reading the technical reports about the hack only now: the Kaspersky report and blog post as well as the Group-IB/Fox-IT report of December 2014 and their recent update. A couple of blog posts by Brian Krebs also helped.
The two technical analyses differ on a few details: Kaspersky suggests that the hackers had a Chinese connection while Group-IB/Fox-IT suggests that they were Russian. Kaspersky also seems to have had access to some evidence discovered by law enforcement agencies (including files on the servers used by the hackers). Group-IB/Fox-IT talk only about Russian banks as the victims while Kaspersky reveals that some US based banks were also hacked. But by and large the two reports tell a similar story.
The hackers did not resort to the obvious ways of skimming money from a bank. To steal money from an ATM, they did not steal customer ATM cards or PINs. Nor did they tamper with the ATM itself. Instead they hacked into the personal computers of bank staff including system administrators and used these hacked machines to send instructions to the ATM using the banks’ ATM infrastructure management software. For example, an ATM uses Windows registry keys to determine which tray of cash contains 100-ruble notes and which contains 5000-ruble notes. For example, the CASH_DISPENSER registry key might have VALUE_1 set to 5000 and VALUE_4 set to 100. A system administrator can change these settings to tell the ATM that the cash has been loaded into different bins by setting VALUE_1 to 100 and VALUE_4 to 5000 and restarting Windows to let the new values take effect. The hackers did precisely that (using the system administrators’ hacked PCs) so that the ATM, which thinks it is dispensing 1000 rubles in the form of ten 100-ruble notes, would actually dispense 50,000 rubles (ten 5000-ruble notes).
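The mechanism is easy to see in a few lines of code. The sketch below is purely my own illustration (the tray names and the note-planning logic are invented for the example, not taken from any actual ATM software): the ATM plans the payout from its registry's denomination map, but the cash handed out depends on what is physically in the tray.

```python
def plan_notes(amount, registry):
    """The ATM plans the payout using the registry's denomination map;
    here it naively pays the whole amount in the smallest listed note."""
    tray, denom = min(registry.items(), key=lambda kv: kv[1])
    assert amount % denom == 0
    return tray, amount // denom

def actual_payout(tray, count, loaded):
    """The cash handed out depends on what is physically in the tray."""
    return count * loaded[tray]

loaded   = {"tray_1": 5000, "tray_4": 100}   # notes physically loaded
honest   = {"tray_1": 5000, "tray_4": 100}   # registry matches reality
tampered = {"tray_1": 100,  "tray_4": 5000}  # hacked registry: values swapped

tray, count = plan_notes(1000, honest)       # ten 100-ruble notes from tray 4
ok = actual_payout(tray, count, loaded)      # 1000 rubles, as intended

tray, count = plan_notes(1000, tampered)     # ATM now believes tray 1 holds 100s
stolen = actual_payout(tray, count, loaded)  # ten 5000-ruble notes: 50,000 rubles
```

With the honest registry the customer receives 1000 rubles; with the swapped registry values the same request pays out 50,000.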
Similarly, an ATM has a debug functionality to allow a technician to test the functioning of the ATM. With the ATM vault door open, a technician could issue a command to the ATM to dispense a specified amount of cash. There is no hazard here because with the vault door open, the technician anyway has access to the whole cash without issuing any command. With access to the system administrators’ machines, the hackers simply deleted the piece of code that checked whether the vault door was open. All that they needed to do was to have a mole stand in front of the ATM when they issued a command to the ATM to dispense a large amount of cash.
Of course, ATMs were not the only way to steal money. Online fund transfer systems could be used to transfer funds to accounts owned by the hackers. Since the hackers had compromised the administrators’ accounts, they had no difficulty getting the banks to transfer the money. The only problem was to prevent the money from being traced back to the hackers after the fraud was discovered. This was achieved by routing the money through several layers of legal entities before loading it onto hundreds of credit cards that had been prepared in advance.
It is a very effective way to steal money, but it requires a lot of patience. “The average time from the moment of penetration into the financial institutions internal network till successful theft is 42 days.” Using emails with malicious attachments to hack a bank employee’s computer, the hackers patiently worked their way laterally, infecting the machines of other employees, until they succeeded in compromising a system administrator’s machine. Then they patiently collected data about the banks’ internal systems using screenshots and videos sent from the administrator’s machines by the hackers’ malware. Once they understood the internal systems well, they could use the systems to steal money.
The lesson for banks and financial institutions is that it is not enough to ensure that the core computer systems are defended in depth. The Snowden episode showed that the most advanced intelligence agencies in the world are vulnerable to subversion by their own administrators. The Carbanak/Anunak incident shows that well defended bank systems are vulnerable to the recklessness of their own employees and system administrators using unpatched Windows computers and carelessly clicking on malicious email attachments.
Thu, 19 Feb 2015
Loss aversion is a basic tenet of behavioural finance, particularly prospect theory. It says that people are averse to losses and become risk-seeking when confronted with certain losses. There is a huge amount of experimental evidence in support of loss aversion, and Daniel Kahneman won the Nobel Prize in Economics mainly for his work in prospect theory.
What are the implications of prospect theory for an economy with pervasive negative interest rates? As I write, German bund yields are negative up to a maturity of five years. Swiss yields are negative out to eight years (until a few days back, it was negative even at the ten year maturity). France, Denmark, Belgium and Netherlands also have negative yields out to at least three years.
A negative interest rate represents a certain loss to the investor. If loss aversion is as pervasive in the real world as it is in the laboratory, then investors should be willing to accept an even more negative expected return in risky assets if these risky assets offer a good chance of avoiding the certain loss. For example, if the expected return on stocks is -1.5% with a volatility of 15%, then there is a 41% chance that the stock market return is positive over a five year horizon (assuming a normal distribution). If the interest rate is -0.5%, a person with sufficiently strong loss aversion would prefer the 59% chance of loss in the stock market to the 100% chance of loss in the bond market. Note that this is the case even though the expected return on stocks in this example is less than that on bonds. As loss averse investors flee from bonds to stocks, the expected return on stocks should fall and we should have a negative equity risk premium. If there are any neo-classical investors in the economy who do not conform to prospect theory, they would of course see this as a bubble in the equity market; but if laboratory evidence extends to the real world, there would not be many of them.
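The 41% figure in the example can be verified in a few lines (my own check, using only the normal-distribution assumption stated above: cumulative return with mean mu*t and standard deviation sigma*sqrt(t)):

```python
from math import erf, sqrt

def prob_positive(mu_annual, sigma_annual, years):
    """P(cumulative return > 0) when the cumulative return is normal with
    mean mu_annual*years and std dev sigma_annual*sqrt(years)."""
    mu = mu_annual * years
    sigma = sigma_annual * sqrt(years)
    # P(X > 0) = Phi(mu/sigma) for X ~ N(mu, sigma^2)
    return 0.5 * (1 + erf(mu / sigma / sqrt(2)))

p_stocks = prob_positive(-0.015, 0.15, 5)   # about 0.41, as in the text
p_loss = 1 - p_stocks                       # about 0.59
```

So stocks with a -1.5% expected return and 15% volatility give roughly a 41% chance of escaping loss over five years, while the -0.5% bond return is a certain loss.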
The second consequence would be that we would see a flipping of the investor clientele in equity and bond markets. Before rates went negative, the bond market would have been dominated by the most loss averse investors. These highly loss averse investors should be the first to flee to the stock markets. At the same time, it should be the least loss averse investors who would be tempted by the higher expected return on bonds (-0.5%) than on stocks (-1.5%) and would move into bonds overcoming their (relatively low) loss aversion. During the regime of positive interest rates and positive equity risk premium, the investors with low loss aversion would all have been in the equity market, but they would now all switch to bonds. This is the flipping that we would observe: those who used to be in equities will now be in bonds, and those who used to be in bonds will now be in equities.
This predicted flipping is a testable hypothesis. Examination of the investor clienteles in equity and bond markets before and after a transition to negative interest rates will allow us to test whether prospect theory has observable macro consequences.
Wed, 04 Feb 2015
Yesterday, the Reserve Bank of India did retail depositors a favour: it announced that it would allow banks to offer “non-callable deposits”. Currently, retail deposits are callable (depositors have the facility of premature withdrawal).
Why can the facility of premature withdrawal be a bad thing for retail depositors? It would clearly be a good thing if the facility came free, but in a free market it would be priced. The facility of premature withdrawal is an embedded American-style swaption: a callable deposit is just a non-callable deposit bundled with that swaption, whether the depositor wants the bundle or not. You pay for the swaption whether you need it or not.
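A minimal numerical sketch (hypothetical numbers) of why the embedded swaption has value: if market rates rise after the deposit is made, withdrawing early, even after paying a penalty, and redepositing at the higher rate leaves the depositor better off. A bank must charge for granting that right.

```python
def withdrawal_gain(principal, deposit_rate, new_rate, penalty):
    """Gain from withdrawing a deposit with one year to run and
    redepositing at the new market rate (all rates annual)."""
    hold = principal * (1 + deposit_rate)                # stay put
    switch = principal * (1 - penalty) * (1 + new_rate)  # withdraw and redeposit
    return switch - hold

# hypothetical: 7% deposit, market rates rise to 9%, 1% withdrawal penalty
gain = withdrawal_gain(100_000, 0.07, 0.09, 0.01)
print(round(gain))  # about 910: the option is worth exercising
```

If rates do not rise, the penalty makes withdrawal a loss, so the option is exercised only in some states of the world: that asymmetry is what gives it positive value.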
Most depositors would not exercise that swaption optimally for the simple reason that optimal exercise is a difficult optimization problem to solve. Fifteen years ago, Longstaff, Santa-Clara and Schwartz showed that Wall Street firms were losing billions of dollars because they were using oversimplified (single-factor) models to exercise American-style swaptions (“Throwing away a billion dollars: The cost of suboptimal exercise strategies in the swaptions market”, Journal of Financial Economics 62.1 (2001): 39-66). Even those simplified (single-factor) models would be far beyond the reach of most retail depositors. It is safe to assume that almost all retail depositors behave suboptimally in exercising their premature withdrawal option.
In a competitive market, callable deposits would be priced using a behavioural exercise model rather than an optimal exercise strategy. Still, a problem remains: some retail depositors would exercise their swaptions better than others. A significant fraction might just ignore the swaption unless they have a liquidity need to withdraw the deposits. These ignorant depositors would subsidize the smarter depositors who exercise it frequently (though still suboptimally). And it makes no sense at all for the regulator to force this bad product on all depositors.
Post global financial crisis, there is a push towards plain vanilla products. The non-callable deposit is a plain vanilla product; the current callable version is a toxic/exotic derivative.
Tue, 27 Jan 2015
Last month, Jonas Heese published a paper on “Government Preferences and SEC Enforcement” which purports to show that the US Securities and Exchange Commission (SEC) refrains from taking enforcement action against companies for accounting restatements when such action could cause large job losses particularly in an election year and particularly in politically important states. The results show that:
- The SEC is less likely to take enforcement action against firms that employ relatively more workers (“labour intensive firms”).
- This effect is stronger in a year in which there is a presidential election.
- The election year effect is in turn stronger in the politically important states that determine the electoral outcome.
- Enforcement action is also less likely if the labour intensive firm is headquartered in the district of a senior congressman who serves on a committee that oversees the SEC.
All the econometrics appear convincing:
- The data includes all enforcement actions pertaining to accounting restatements over a 30-year period from 1982 to 2012: nearly 700 actions against more than 300 firms.
- A comprehensive set of control variables has been used, including the F-score, which has been used in previous literature to predict accounting restatements.
- A variety of robustness and sensitivity tests have been used to validate the results.
But then, I realized that there is one very big problem with the paper – the definition of labour intensity:
I measure LABOR INTENSITY as the ratio of the firm’s total employees (Compustat item: EMP) scaled by current year’s total average assets. If labor represents a relatively large proportion of the factors of production, i.e., labor relative to capital, the firm employs relatively more employees and therefore, I argue, is less likely to be subject to SEC enforcement actions.
Seriously? I mean, does the author seriously believe that politicians would happily attack a $1 billion company with 10,000 employees (because it has a relatively low labour intensity of 10 employees per $1 million of assets), but would be scared of targeting a $10 million company with 1,000 employees (because it has a relatively high labour intensity of 100 employees per $1 million of assets)? Any politician with such a weird electoral calculus is unlikely to survive for long in politics. (But a paper based on this alleged electoral calculus might even get published!)
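The arithmetic behind this objection, using the hypothetical firms from the paragraph above:

```python
def labour_intensity(employees, assets_millions):
    """The paper's measure: employees per $1 million of total assets."""
    return employees / assets_millions

big = labour_intensity(10_000, 1_000)  # $1 billion firm, 10,000 employees
small = labour_intensity(1_000, 10)    # $10 million firm, 1,000 employees
print(big, small)  # prints 10.0 100.0: the measure favours the tiny firm
```

By this measure, the firm with ten times as many voters on its payroll is the one that is "safe" to prosecute.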
I now wonder whether the results are all due to data mining. Hundreds of researchers are trying many things: they are choosing different subsets of SEC enforcement actions (say accounting restatements), they are selecting different subsets of companies (say non financial companies) and then they are trying many different ratios (say employees to assets). Most of these studies go nowhere, but a tiny minority produce significant results and they are the ones that we get to read.
Thu, 22 Jan 2015
In high frequency trading, nine minutes is an eternity: it is half a million milliseconds – enough time for five billion quotes to arrive in the hyperactive US equity options market at its peak rate. On a human time scale, nine minutes is enough time to watch two average-length online videos.
So what puzzles me about the soaring Swiss franc last week (January 15) is not that it rose so much, nor that it massively overshot its fair level, but that the initial rise took so long. Here is the time line of how the franc moved:
- At 9:30 am GMT, the Swiss National Bank (SNB) announced that it was “discontinuing the minimum exchange rate of CHF 1.20 per euro” that it had set three years earlier. I am taking the time stamp of 9:30 GMT from the “dc-date” field in the RSS feed of the SNB which reads “2015-01-15T10:30:00+01:00” (10:30 am local time which is one hour ahead of GMT).
- The headline “SNB ENDS MINIMUM EXCHANGE RATE” appeared on Bloomberg terminals at 9:30 am GMT itself. Bloomberg presumably runs a super fast version of “if this then that”. (It took Bloomberg nine minutes to produce a human-written story about the development, but anybody who needs a human-written story to interpret that headline has no business trading currencies.)
- At the end of the first minute, the euro had traded down only to 1.15 francs; at the end of the third minute, it still traded above 1.10. The next couple of minutes saw a lot of volatility, with the euro falling below 1.05 and recovering to 1.15. At the end of minute 09:35, the euro again dropped below 1.05 and started trending down. It was only around 09:39 that it fell below 1.00. It is these nine minutes (half a million milliseconds) that I find puzzling.
- The euro hit its low (0.85 francs) at 09:49, nineteen minutes (1.1 million milliseconds) after the announcement. This overshooting is understandable because the surge in the franc would have triggered many stop loss orders and also knocked out many barrier options.
- Between 09:49 and 09:55, the euro recovered from its low and after that it traded between 1.00 and 1.05 francs.
It appears puzzling to me that no human trader was taking out every euro bid in sight at around 9:33 am or so. I find it hard to believe that somebody like a George Soros in his heyday would have taken more than a couple of minutes to conclude that the euro would drop well below 1.00. It would then make sense to simply hit every euro bid above 1.00 and then wait for the point of maximum panic to buy the euros back.
Is it that high frequency trading has displaced so many human traders that there are too few humans left who can trade boldly when the algorithms shut down? Or are we in a post crisis era of mediocrity in the world of finance?
Updated to correct 9:03 to 9:33, change eight billion to five billion and end the penultimate sentence with a question mark.
Tue, 13 Jan 2015
Two months back, I wrote a blog post on how the Basel Committee on Payments and Market Infrastructures was reckless in insisting on a two hour recovery time even from severe cyber attacks.
I think that extending the business continuity resumption time target to a cyber attack is reckless and irresponsible because it ignores Principle 16 which requires an FMI to “safeguard its participants’ assets and minimise the risk of loss on and delay in access to these assets.” In a cyber attack, the primary focus should be on protecting participants’ assets by mitigating the risk of data loss and fraudulent transfer of assets. In the case of a serious cyber attack, this principle would argue for a more cautious approach which would resume operations only after ensuring that the risk of loss of participants’ assets has been dealt with. ... The risk is that payment and settlement systems in their haste to comply with the Basel mandates would ignore security threats that have not been fully neutralized and expose their participants’ assets to unnecessary risk. ... This issue is all the more important for countries like India whose enemies and rivals include some powerful nation states with proven cyber capabilities.
I am glad that last month, the Reserve Bank of India (RBI) addressed this issue in its Financial Stability Report. Of course, as a regulator, the RBI uses far more polite words than a blogger like me, but it raises almost the same concerns (para 3.58):
One of the clauses under PFMIs requires that an FMI operator’s business continuity plans must ‘be designed to ensure that critical information technology (IT) systems can resume operations within two hours following disruptive events’ and that there can be ‘complete settlement’ of transactions ‘by the end of the day of the disruption, even in the case of extreme circumstances’. However, a rush to comply with this requirement may compromise the quality and completeness of the analysis of causes and far-reaching effects of any disruption. Restoring all the critical elements of the system may not be practically feasible in the event of a large-scale ‘cyber attack’ of a serious nature on a country’s financial and other types of information network infrastructures. This may also be in conflict with Principle 16 of PFMIs which requires an FMI to safeguard the assets of its participants and minimise the risk of loss, as in the event of a cyber attack priority may need to be given to avoid loss, theft or fraudulent transfer of data related to financial assets and transactions.
Sat, 03 Jan 2015
I read two papers last week that introduced heterogeneous investors into multi factor asset pricing models. The papers help produce a better understanding of momentum and value but they seem to raise as many questions as they answer. The easier paper is A Tug of War: Overnight Versus Intraday Expected Returns by Dong Lou, Christopher Polk, and Spyros Skouras. They show that:
100% of the abnormal returns on momentum strategies occur overnight; in stark contrast, the average intraday component of momentum profits is economically and statistically insignificant. ... In stark contrast, the profits on size and value ... occur entirely intraday; on average, the overnight components of the profits on these two strategies are economically and statistically insignificant.
The paper also presents some evidence that “is consistent with the notion that institutions tend to trade intraday while individuals are more likely to trade overnight.” In my view, their evidence is suggestive but by no means compelling. The authors also claim that individuals trade with momentum while institutions trade against it. If momentum is not a risk factor but a free lunch, then this would imply that individuals are smart investors.
The NBER working paper (Capital Share Risk and Shareholder Heterogeneity in U.S. Stock Pricing) by Martin Lettau, Sydney C. Ludvigson and Sai Ma presents a more complex story. They claim that rich investors (those in the highest deciles of the wealth distribution) invest disproportionately in value stocks, while those in lower wealth deciles invest more in momentum stocks. They then examine what happens to the two classes of investors when there is a shift in the share of income in the economy going to capital as opposed to labour. Richer investors derive most of their income from capital and an increase in the capital share benefits them. On the other hand, investors from lower deciles of wealth derive most of their income from labour and an increase in the capital share hurts them.
Finally, the authors show very strong empirical evidence that the value factor is positively correlated with the capital share while momentum is negatively correlated. This would produce a risk based explanation of both factors. Value stocks lose money when the capital share is moving against the rich investors who invest in value and therefore these stocks must earn a risk premium. Similarly, momentum stocks lose money when the capital share is moving against the poor investors who invest in momentum and therefore these stocks must also earn a risk premium.
The different portfolio choices of the rich and the poor are plausible but not backed by any firm data. The direction of causality may well run in the opposite direction: Warren Buffett became rich by buying value stocks; he did not invest in value because he was rich.
But the more serious problem with their story is that it implies that both rich and poor investors are irrational in opposite ways. If their story is correct, then the rich must invest in momentum stocks to hedge capital share risk. For the same reason, the poor should invest in value stocks. In an efficient market, investors should not earn a risk premium for stupid portfolio choices. (Even in a world of homogeneous investors, it is well known that a combination of value and momentum has a better risk-return profile than either by itself: see for example, Asness, C. S., Moskowitz, T. J. and Pedersen, L. H. (2013), Value and Momentum Everywhere. The Journal of Finance, 68: 929-985)
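The diversification point in the parenthesis can be illustrated with a back-of-the-envelope calculation. The numbers below are purely hypothetical: suppose each factor earns 5% at 10% volatility, and the two are correlated at -0.5 (negative correlation between value and momentum is the sign documented by Asness et al.):

```python
from math import sqrt

def sharpe(mu, sigma):
    """Sharpe ratio of a single factor (risk-free rate ignored)."""
    return mu / sigma

def combo_sharpe(mu1, s1, mu2, s2, rho, w=0.5):
    """Sharpe ratio of a w / (1 - w) blend of two factors."""
    mu = w * mu1 + (1 - w) * mu2
    var = (w * s1) ** 2 + ((1 - w) * s2) ** 2 + 2 * w * (1 - w) * rho * s1 * s2
    return mu / sqrt(var)

each = sharpe(0.05, 0.10)                          # 0.5 for either factor alone
both = combo_sharpe(0.05, 0.10, 0.05, 0.10, -0.5)  # 1.0 for the 50/50 blend
```

The blend keeps the full average return while the negative correlation cancels half the variance, doubling the Sharpe ratio. That is exactly why a rational investor of either type should hold both factors rather than specialize.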
Sun, 21 Dec 2014
Yesterday, I blogged about the Clifford Chance report on the UK FCA (Financial Conduct Authority) from the viewpoint of regulatory capture. Today, I turn to the issue of the selective pre-briefing provided by the FCA to journalists and industry bodies. Of course, the FCA is not alone in doing this: government agencies around the world indulge in this anachronistic practice.
In the pre-internet era, government agencies had to rely on the mass media to disseminate their policies and decisions. It was therefore necessary for them to cultivate the mass media to ensure that their messages got the desired degree of coverage. One of the ways of doing this was to provide privileged access to select journalists in return for enhanced coverage.
This practice is now completely anachronistic. The internet has transformed the entire paradigm of mass communication. In the old days, we had a push channel in which the big media outlets pushed their content out to consumers. The internet is a pull channel in which consumers pull whatever content they want. For example, I subscribe to the RSS/Atom feeds of several regulators around the world. I also subscribe to the feeds of several blogs which comment on regulatory developments worldwide. My feed reader pulls all this content to my computer and mobile devices and provides me instant access to these messages without the intermediation of any big media gatekeepers.
In this context, the entire practice of pre-briefing is anachronistic. Worse, it is inimical to the modern democratic ideals of equal and fair access for all. The question then is why it survives at all. I am convinced that what might have had some legitimate function decades ago has now been corrupted into something more nefarious. Regulators now use privileged access to suborn the mass media and to get favourable coverage of their decisions. Journalists have to think twice before they write something critical about a regulator who may simply cut off their privileged access.
It is high time we put an end to this diabolical practice. What I would like to see is the following:
1. A regulator could meet a journalist one-on-one, but the entire transcript of the interview must then be published on the regulator’s website and the interview must be embargoed until such publication.
2. A regulator could hold press conferences or grant live interviews to the visual media, but such events must be web cast live on the regulator’s website and transcripts must be published soon after.
3. The regulators should not differentiate between (a) journalists from the mainstream media and (b) representatives of alternate media (including bloggers).
4. Regulator web sites and feeds must be more friendly to the general public. For example, the item description field in an RSS feed or the item content field in an Atom feed should contain enough information for a casual reader to decide whether it is worth reading in full. Regulatory announcements must provide enough background to enable the general public to understand them.
Any breach of (1) or (2) above should be regarded as a selective disclosure that attracts the same penalties as selective disclosure by an officer of a listed company.
What I also find very disturbing is the practice of the regulator holding briefing sessions with a select group of regulated entities or their associations or lobby groups. In my view, while the regulator does need to hold confidential discussions with regulated entities on a one-on-one basis, any meeting attended by more than one entity cannot by definition be about confidential supervisory concerns. The requirement of publication of transcripts or live web casts should apply in these cases as well. In the FCA case, it seems to be taken for granted by all (including the Clifford Chance report) that the FCA needs to have confidential discussions with the Association of British Insurers (ABI). I think this view is mistaken, particularly when it is not considered necessary to hold a similar discussion with the affected policy holders.
Sat, 20 Dec 2014
I just finished reading the 226-page report that the non-independent directors of the UK FCA (Financial Conduct Authority) commissioned from the law firm Clifford Chance on the FCA’s botched communications regarding its proposed review of how insurance companies treat customers trapped in legacy pension plans. The report, published earlier this month, deals with the selective disclosure of market-moving, price-sensitive information by the FCA itself to one journalist, and the failure of the FCA to issue corrective statements in a timely manner after large price movements in the affected insurance companies on March 28, 2014.
I will have a separate blog post on this whole issue of selective disclosure to journalists and to industry lobby groups. But in this post, I want to write about what I think is the bigger issue in the whole episode: what appears to me to be a regulatory capture of the Board of the FCA and of HM Treasury. It appears to me that the commissioning of the Clifford Chance review serves to divert attention from this vital issue and allows the regulatory capture to pass unnoticed.
The rest of this blog post is based on reading between the lines in the Clifford Chance report and is thus largely speculative. The evidence of regulatory capture is quite stark, but most of the rest of the picture that I present could be totally wrong.
The sense that I get is that there were two schools of thought within the FCA. One group of people thought that the FCA needed to do something about the 30 million policy holders who were trapped in exploitative pension plans that they could not exit because of huge exit fees. Since the plans were contracted prior to 2000 (in some cases they dated back to the 1970s), they did not enjoy the consumer protections of the current regulatory regime. This group within the FCA wanted to use the regulator’s powers to prevent these policy holders from being treated unfairly. The simplest solution of course was to abolish the exit fees, and let these 30 million policy holders choose new policies.
The other group within the FCA wanted to conduct a cosmetic review so that the FCA would be seen to be doing something, but did not want to do anything that would really hurt the insurance companies who made tons of money off these bad policies. Much of the confusion and lack of coordination between different officials of the FCA brought out in the Clifford Chance report appears to me to be only a manifestation of the tension between these two views within the FCA. It was critical for the second group’s strategy to work that the cosmetic review receive wide publicity that would fool the public into thinking that something was being done. Hence the idea of doing a selective pre-briefing to a journalist known to be sympathetic to the plight of the poor policy holders. The telephonic briefing with this journalist was not recorded, and was probably ambiguous enough to maintain plausible deniability.
The journalist drew the reasonable inference that the first group in the FCA had won and that the FCA was serious about giving a fair deal to the legacy policy holders and reported accordingly. What was intended to fool only the general public ended up fooling the investors as well, and the stock prices of the affected insurance companies crashed after the news report came out. The big insurance companies were now scared that the review might be a serious affair after all and pulled out all the stops to protect their profits. They reached out to the highest levels of the FCA and HM Treasury and ensured that their voice was heard. Regulatory capture is evident in the way in which the FCA abandoned even the pretence of serious action, and became content with cosmetic measures. Before the end of the day, a corrective statement came out of the FCA which made all the right noises about fairness, but made it clear that exit fees would not be touched.
The journalist in question (Dan Hyde of the Telegraph) nailed this contradiction in an email quoted in the Clifford Chance report (para 16.8):
But might I suggest that by any standard an exit fee that prevents a customer from getting a fairer deal later in life is in itself an unfair term on a policy.
On March 28, 2014, the top brass of the FCA and HM Treasury could see the billions of pounds wiped out on the stock exchange from the market value of the insurance companies, and they could of course hear the complaints from the chairmen of those powerful insurance companies. There was no stock exchange showing the corresponding improvement in the net worth of millions of policy holders savouring the prospect of escape from unfair policies, and their voice was not being heard at all. Out of sight, out of mind.
Sat, 13 Dec 2014
Two days back, the Securities and Exchange Board of India (SEBI) issued a public Caution to Investors about entities that make false promises and assure high returns. This is quite sensible and also well intentioned. But the first paragraph of the press release is completely wrong in asking investors to focus on whether the investment is being offered by a regulated or by an unregulated entity:
It has come to the notice of Securities and Exchange Board of India (SEBI) that certain companies / entities unauthorisedly, without obtaining registration and illegally are collecting / mobilising money from the general investors by making false promises, assuring high return, etc. Investors are advised to be careful if the returns offered by the person/ entity is very much higher than the return offered by the regulated entities like banks, deposits accepted by Companies, registered NBFCs, mutual funds etc.
This is all wrong because the most important red flag is the very high return itself, and not the absence of registration and regulation. That is the key lesson from the Efficient Markets Hypothesis:
If something appears too good to be true, it is not true.
For the purposes of this proposition, it does not matter whether the entity is regulated. To take just one example, Bernard L. Madoff Investment Securities LLC was regulated by the US SEC as a broker dealer and as an investment advisor. Fairfield Greenwich Advisors LLC (through whose Sentry Fund, many investors invested in Madoff’s Ponzi scheme) was also an SEC regulated investment advisor.
Regulated entities are always very keen to advertise their regulated status as a sign of safety and soundness. (Most financial entities usually prefer light touch regulation to no regulation at all.) But regulators are usually at pains to avoid giving the impression that regulation amounts to a seal of approval. For example, every public issue prospectus in India contains the disclaimer:
The Equity Shares offered in the Issue have not been recommended or approved by the Securities and Exchange Board of India
In this week’s press release, however, SEBI seems to have inadvertently lowered its guard, and has come dangerously close to implying that regulation is a seal of approval and respectability. Many investors would misinterpret the press release as saying that it is quite safe to put money in a bank deposit or in a mutual fund. No, that is not true at all: the bank could fail, and market risks could produce large losses in a mutual fund.
Mon, 08 Dec 2014
I made an advance tax payment online today and it struck me that the bank never asks for two factor authentication for advance tax payments. It seems scandalous to me that payments of several hundreds of thousands of rupees are allowed without two factor authentication at a time when the online taxi companies are not allowed to bypass two factor authentication for payments of a few hundred rupees.
I can think of a couple of arguments why advance tax is different, but none are convincing:
- The advance tax will be refunded if it is excessive. This argument fails because the refund could take a year if one is talking about the first instalment of advance tax. Moreover, the taxi companies will also promise to make a refund (and much faster than a year).
- The hacker would gain nothing financially out of making an advance tax payment. This argument forgets the fact that a lot of hacking is of the “denial of service” kind. A businessman could hire a hacker to drain money out of his rival’s bank account and prevent the rival from bidding in an auction. That would give a clear financial benefit from hacking.
The point is that the rule of law demands that the same requirements apply to one and all. The “King can do no wrong” argument is inconsistent with the rule of law in a modern democracy. I believe that all payments above some threshold should require two factor authentication.
Thu, 04 Dec 2014
No, that is not a typo; I am asserting the opposite of the conventional wisdom that foreign portfolio investment is fickle while foreign direct investment is more reliable. The conventional wisdom was on display today in news reports about the parliament’s apparent willingness to allow foreign direct investment in the insurance sector, but not foreign portfolio investment.
The conventional wisdom is propagated by macroeconomists who look at the volatility of aggregate capital flows – it is abundantly clear that portfolio flows stop and reverse during crisis periods (“sudden stops”) while FDI flows are more stable. Things look very different at the enterprise level, but economists working in microeconomics and corporate finance who can see a different world often do not bother to discuss policy issues.
Let me therefore give an example from the Indian banking industry to illustrate what I mean. In the late 1990s, after the Asian Crisis, one of the largest banks in the world decided that Asia was a dangerous place to do banking, sold a significant part of its banking operations in India, and went home. That is what I mean by fickle FDI. At the same time, foreign portfolio investors were providing tons of patient capital to Indian private banks like HDFC, ICICI and Axis to grow their business in India. In the mid 1990s, many people thought that liberalization would allow foreign banks to thrive; in reality, they lost market share (partly due to the fickleness and short-termism of their parents), and it is the Indian banks funded by patient foreign portfolio capital that gained a large market share.
In 2007, as the Great Moderation was about to end, but markets were still booming, ICICI Bank tapped the markets to raise $5 billion of equity capital (mainly from foreign portfolio investors) in accordance with the old adage of raising equity when it is available and not when it is needed. The bank therefore entered the global financial crisis with a large buffer of capital originally intended to finance its growth a couple of years ahead. During the crisis, even this buffer was perceived to be inadequate and the bank needed to downsize the balance sheet to ensure its survival. But without that capital buffer raised in good times, its position would have been a lot worse; it might even have needed a government bailout.
Now imagine that instead of being funded by portfolio capital, ICICI had been owned by say Citi. Foreign parents do not like to fund their subsidiaries ahead of need; they prefer to drip feed the subsidiary with capital as and when needed. In fact, if the need is temporary, the parent usually provides a loan instead of equity so that it can be called back when it is no longer needed. So the Indian subsidiary would have entered the crisis without that large capital buffer. During the crisis, the ability of the embattled parent to provide a large capital injection into its Indian operations would have been highly questionable. Very likely, the Indian subsidiary would have ended up as a ward of the state.
Macro patterns hide these interesting micro realities. The conventional wisdom ignores the fact that enterprise level risk management works to counter the vagaries of the external funding environment. It ignores the standard insight from the markets versus hierarchies literature that funding which relies on a large number of alternate providers of capital is far more resilient than funding which relies on just one provider. In short, it is time to overturn the conventional wisdom.
Thu, 27 Nov 2014
I had an extended email conversation last month with a respected economist (who wishes to remain anonymous) about whether governments of oil importing countries should hedge oil price. While there is a decent literature on oil price hedging by oil exporters (for example, this IMF Working Paper of 2001), there does not seem to be much on oil importers. So we ended up more or less debating this from first principles. The conversation helped clarify my thinking, and this blog post summarizes my current views on this issue.
I think that hedging oil price risk does not make much sense for the government of an oil importer for several reasons:
- Oil imports are usually not a very large fraction of GDP; by contrast, oil exports are often a major chunk of GDP for a large exporter. For most countries, oil price risk is just one among many different macroeconomic shocks that can hit the country. Just as equity capital is the best hedge against general business risks for a company, external reserves and fiscal capacity are the best hedges against general macroeconomic shocks for a country.
- For a country, the really important strategic risk relating to oil is a supply disruption (embargo for example) and this can be hedged only with physical stocks (like the US strategic oil reserve).
- A country is an amorphous entity. Probably, it is the government that will do the hedge, and private players that would consume the oil. Who pays for the hedge and who benefits from it? Does the government want the private players to get the correct price signal? Does it want to subsidize the private sector? If it is the private players who are consuming oil, why don’t we let them hedge the risk themselves?
- Futures markets may not provide sufficient depth, flexibility and liquidity to absorb a large importer’s hedging needs. The total open interest in ICE Brent futures is roughly equal to India’s annual crude import.
Frankly, I think it makes sense for the government to hedge oil price risk only if it is running an administered price regime. In this case, we can analyse its hedging like a corporate hedging program. The administered price regime makes the government short oil (it is contracted to sell oil to the private sector at the administered price), and then it makes sense to hedge the fiscal cost by buying oil futures to offset its short position.
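The offsetting logic can be sketched numerically; a minimal illustration in which all prices and quantities are invented:

```python
# Sketch of hedging the fiscal cost of an administered oil price with futures.
# All prices and quantities here are purely illustrative.

def fiscal_cost(market_price, administered_price, quantity):
    """Government buys oil at the market price and sells it to the private
    sector at the administered price: it is effectively short oil."""
    return (market_price - administered_price) * quantity

def futures_pnl(entry_price, settlement_price, quantity):
    """Profit on a long futures position of the same quantity."""
    return (settlement_price - entry_price) * quantity

qty = 1_000_000     # barrels (hypothetical)
admin = 80.0        # administered price, $/barrel (hypothetical)
entry = 80.0        # futures bought when they trade at the administered price

# With the long futures offsetting the short oil position, the net fiscal
# cost is locked in regardless of where the market price ends up.
for market in (60.0, 80.0, 110.0):
    net = fiscal_cost(market, admin, qty) - futures_pnl(entry, market, qty)
    print(market, net)   # net is 0.0 at every market price
```

The linear subsidy exposure is exactly offset by the linear futures payoff, which is why the corporate-hedging analogy works under an administered price regime.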
But an administered price regime is not a good idea. Even if, for the moment, one accepts the dubious proposition that rapid industrialization requires strategic underpricing of key inputs (labour, capital or energy), we only get an argument for energy price subsidies, not for energy price stabilization. The political pressure for short term price stabilization comes from the presence of a large number of vocal consumers (think of single truck owners, for example) who have large exposures to crude price risk but do not have access to hedging markets. If we accept that the elasticity of demand for crude is near zero in the short term (though it may be pretty high in the long term), then unhedged entities with large crude exposures will find it difficult to tide over the short term during which they cannot reduce demand. They can be expected to be very vocal about their difficulties. The solution is to make futures markets more accessible to small and mid-size companies, unincorporated businesses and even self-employed individuals who need such hedges. This is what India has done by opening up futures markets to all, including individuals. Most individuals might not need these markets (financial savings are the best hedge against most risks for individuals who are not in business). But it is easier to open up the markets to all than to impose complex documentation requirements that restrict access. Easy hedging eliminates the political need for administered energy prices.
With free energy pricing in place, the most sensible hedge for governments is a huge stack of foreign exchange reserves and a large pool of oil under the ground in a strategic reserve.
Mon, 24 Nov 2014
The Socializing Finance blog points to a PNAS paper showing that ethnic diversity drastically reduces the incidence of price bubbles in experimental markets. This is a conclusion that I am inclined to believe on theoretical grounds, and the paper itself presents the theoretical arguments very persuasively. However, the experimental evidence leaves me unimpressed.
The biggest problem is that in both the locales (Southeast Asia and North America) in which they carried out the experiments:
In the homogeneous markets, all participants were drawn from the dominant ethnicity in the locale; in the diverse markets, at least one of the participants was an ethnic minority.
This means that the experimental design conflates the presence of ethnic diversity with that of ethnic minorities. This is all the more important because, for the experiments, they recruited skilled participants trained in business or finance. There could therefore be a significant self-selection bias here, in that ethnic minority members who chose to train in business or finance might have been those with exceptional talent or aptitude.
This fear is further aggravated by the result in Figure 2 showing that the Southeast Asian markets performed far better than the North American markets. In fact, the homogeneous Southeast Asian markets did better than the diverse North American markets! The diverse Southeast Asian market demonstrated near perfect pricing accuracy. This suggests that the ethnic fixed effects (particularly the gap between the dominant North American ethnic group and the minority Southeast Asian ethnic group) are very large. A proper experimental design would have had homogeneous markets made out of minority ethnic members as well so that the ethnic fixed effects could be estimated and removed.
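The identification problem can be made concrete with a stylized sketch in which observed market accuracy is decomposed into a baseline, a minority fixed effect and a diversity effect; all the numbers are invented:

```python
# Stylized decomposition: observed market accuracy = baseline
# + minority fixed effect (if any minority members present) + diversity effect.
# All numbers are invented for illustration.

def accuracy(baseline, minority_fe, diversity_effect, has_minority, is_diverse):
    return baseline + minority_fe * has_minority + diversity_effect * is_diverse

# Story A: the improvement is entirely a genuine diversity effect.
a_homog   = accuracy(0.50, 0.00, 0.20, has_minority=0, is_diverse=0)
a_diverse = accuracy(0.50, 0.00, 0.20, has_minority=1, is_diverse=1)

# Story B: the improvement is entirely a minority fixed effect (self-selection).
b_homog   = accuracy(0.50, 0.20, 0.00, has_minority=0, is_diverse=0)
b_diverse = accuracy(0.50, 0.20, 0.00, has_minority=1, is_diverse=1)

# Both stories produce identical observed cells, so the two-cell design cannot
# tell them apart. A homogeneous-minority market (has_minority=1, is_diverse=0)
# is the missing cell that would separate the two effects.
print(a_homog == b_homog and a_diverse == b_diverse)   # True
```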
Another reason that I am not persuaded by the experimental evidence is that the experimental design prevented participants from seeing each other or communicating directly while trading. As the authors state, “So, direct social influence was curtailed, but herding was possible.” With a major channel of diversity-induced improvement blocked off by the design itself, one’s prior on the size of the diversity effect is lower than it would otherwise be.
Mon, 17 Nov 2014
The Basel Committee on Payments and Market Infrastructures (CPMI, previously known as CPSS) has issued a document about Cyber resilience in financial market infrastructures insisting that payment and settlement systems should be able to resume operations within two hours of a cyber attack and should be able to complete settlement by the end of the day. The Committee is treating a cyber attack as a business continuity issue and is applying Principle 17 of its Principles for financial market infrastructures. Key Consideration 6 of Principle 17 requires that the business continuity plan “should be designed to ensure that critical information technology (IT) systems can resume operations within two hours following disruptive events” and that the plan “should be designed to enable the FMI to complete settlement by the end of the day of the disruption, even in the case of extreme circumstances”.
I think that extending the business continuity resumption time target to a cyber attack is reckless and irresponsible because it ignores Principle 16 which requires an FMI to “safeguard its participants’ assets and minimise the risk of loss on and delay in access to these assets.” In a cyber attack, the primary focus should be on protecting participants’ assets by mitigating the risk of data loss and fraudulent transfer of assets. In the case of a serious cyber attack, this principle would argue for a more cautious approach which would resume operations only after ensuring that the risk of loss of participants’ assets has been dealt with.
I believe that if there were to be a successful cyber attack against a well run payment and settlement system, the attack would most likely be carried out by a nation-state. Such an attack would therefore be backed by resources and expertise far exceeding what any payment and settlement system would possess. Neutralizing such a threat would require assistance from the national security agencies of the FMI’s own nation. It is silly to assume that such a cyber war between two nation-states would be resolved within two hours just because a committee in Basel mandates it.
The risk is that payment and settlement systems in their haste to comply with the Basel mandates would ignore security threats that have not been fully neutralized and expose their participants’ assets to unnecessary risk. I think the CPMI is being reckless and irresponsible in encouraging such behaviour.
This issue is all the more important for countries like India whose enemies and rivals include some powerful nation states with proven cyber capabilities. I think that Indian regulators should tell their payment and settlement systems that Principle 16 prevails over Principle 17 in the case of any conflict between the two principles. With this clarification, the CPMI guidance on cyber attacks would be effectively defanged.
Sun, 09 Nov 2014
The UK seems to be going in the opposite direction to the US in terms of providing liquidity support to clearing corporations or central counterparties (CCPs). In the US, the amendments by the Dodd Frank Act made it extremely difficult for the central bank to provide liquidity assistance to any non bank. On the other hand, the Bank of England on Wednesday extended its discount window not only to all CCPs but also to systemically important broker-dealers (h/t OTC Space). The Bank of England interprets its liquidity provision function very widely:
As the supplier of the economy’s most liquid asset, central bank money, the Bank is able to be a ‘back-stop’ provider of liquidity, and can therefore provide liquidity insurance to the financial system.
My own view has always been that CCPs should have access to the discount window but only to borrow against the best quality paper (typically, government bonds). If there is a large shortfall in the pay-in, a CCP has to mobilize liquidity in probably less than an hour (before pay-out), and the only entity able to provide large amounts of liquidity at such short notice is the central bank. But if a CCP does not have enough top quality collateral on hand, it should be allowed to fail. A quarter century ago, Ben Bernanke argued that it makes sense for the central bank to stand behind even a failing CCP (Ben S. Bernanke, “Clearing and Settlement during the Crash”, The Review of Financial Studies, Vol. 3, No. 1, pp. 133-151). But I would not go that far. Most jurisdictions today are designing resolution mechanisms to deal with failed CCPs, and these mechanisms should work even in a crisis situation.
Sat, 01 Nov 2014
If you are trying to sell $200 million of nearly flawless counterfeit $20 currency notes, there is only one real buyer – the US government itself. That seems to be the moral of a story in GQ Magazine about Frank Bourassa.
The story is based largely on Bourassa’s version of events and is possibly distorted in many details. However, the story makes it pretty clear that the main challenge in counterfeiting is not in the manufacture, but in the distribution. Yes, there is a minimum scale in the production process – Bourassa claims that a high end printing press costing only $300,000 was able to achieve high quality fakes. The challenge that he faced was in buying the correct quality of paper. The story does not say why he did not think of vertical integration by buying a mini paper mill, but I guess that is because it is difficult to operate a paper mill secretly, unlike a printing press, which can be run in a garage without anybody knowing about it. Bourassa was able to proceed because some paper mill somewhere in the world was willing to sell him the paper that he needed.
The whole point of anti counterfeiting technology is to increase the fixed cost of producing a note without increasing the variable cost too much. So high quality counterfeiting is not viable unless it is done at scale. But the distribution of fake notes suffers from huge diseconomies of scale – while it is pretty easy to pass off a few fake notes (especially small denomination notes), Bourassa found it difficult to sell a large number of notes even at a 70% discount to face value. He ended up selling his stockpile to the US government itself. The price was his own freedom.
To prevent counterfeiting, the government needs to ensure that at every possible scale of operations, the combined cost of production and distribution exceeds the face value of the note. At low scale, the high fixed production cost makes counterfeiting uneconomical, while at large scale, the high distribution cost is the counterfeiter’s undoing. That is why the only truly successful counterfeiters have been other sovereigns who have two decisive advantages: first for them the fixed costs are actually sunk costs, and second, they have access to distribution networks that ordinary counterfeiters cannot dream of.
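This cost structure can be sketched with a toy model. The $300,000 press is from the story, but the variable cost and the scale-dependent resale discounts below are my own invented assumptions:

```python
# Toy model of counterfeiting economics: high fixed production cost bites at
# small scale, steep distribution diseconomies bite at large scale. The
# $300,000 press is from the GQ story; the variable cost and the
# resale-discount schedule are invented for illustration.

FIXED_COST = 300_000.0
VARIABLE_COST = 0.50     # assumed production cost per $20 note
FACE_VALUE = 20.0

def resale_fraction(n_notes):
    """Assumed fraction of face value a buyer of fakes will pay,
    falling sharply as the volume to be distributed grows."""
    if n_notes < 10_000:
        return 0.55
    elif n_notes < 1_000_000:
        return 0.30    # Bourassa reportedly struggled even at a 70% discount
    return 0.02        # at $200 million of face value, only the government "buys"

def profit(n_notes):
    revenue = n_notes * FACE_VALUE * resale_fraction(n_notes)
    return revenue - (FIXED_COST + n_notes * VARIABLE_COST)

# Negative at every scale: fixed costs kill small runs, distribution kills big ones.
for n in (5_000, 50_000, 5_000_000):
    print(n, round(profit(n)))
```

Under these assumed numbers the combined cost of production and distribution exceeds the resale value at every scale, which is exactly the condition the government wants to engineer.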
Wed, 29 Oct 2014
A few days back, the IMF made a change in its rule for setting interest rates on SDRs (Special Drawing Rights) and set a floor of 5 basis points (0.05%) on this rate. The usual zero lower bound on interest rates does not apply to the SDR as there are no SDR currency notes floating around. The SDR is only a unit of account and to some extent a book entry currency. There is no technical problem with setting the interest rate on the SDR to a substantially negative number like -20%.
In finance theory, there is no conceptual problem with a large negative interest rate. Though we often describe the interest rate (r) as a price, it is actually 1+r and not r itself that is a price: the price of one unit of money today, in terms of money a year from now, is 1+r. Prices have to be non-negative, but this only requires that r cannot drop below -100%. With bearer currency in circulation, a zero lower bound (ZLB) comes about because savers have the choice of saving in the form of currency and earning a zero interest rate. Actually the return on cash is slightly negative (perhaps -0.25% to -0.50%) because of storage (and insurance) costs. As such, the ZLB is not at zero, but somewhere between -0.25% and -0.50%.
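The arithmetic is worth spelling out; a minimal sketch in which the storage cost number is an assumption:

```python
# The relative price linking money today and money a year from now is 1 + r:
# one unit today exchanges for 1 + r units next year. Prices must be
# non-negative, so the only hard floor is r >= -100%. With bearer currency,
# the binding bound is instead the (negative) net return on hoarding cash,
# assumed here to be about -0.5% from storage and insurance costs.

def price_of_present_money(r):
    """Units of year-ahead money that one unit of money today exchanges for."""
    return 1.0 + r

assert price_of_present_money(-0.20) > 0    # -20% is still a valid price...
assert price_of_present_money(-1.00) == 0   # ...the hard floor is r = -100%

cash_return = -0.005     # assumed net return on hoarding currency
deposit_rate = -0.002    # a mildly negative book-entry interest rate

# Savers stay in deposits as long as the rate beats hoarding cash, so book-entry
# rates can go somewhat below zero before the bound binds at all.
print(deposit_rate > cash_return)   # True
```

For a pure unit-of-account currency like the SDR there is no cash to hoard, so even this soft bound disappears.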
It has long been understood that a book entry (or mere unit of account) currency like the SDR is not subject to the ZLB at all. Buiter for example proposed the use of a parallel electronic currency as a way around the ZLB.
In this context, it is unfortunate that the IMF has succumbed to the fetishism of positive interest rates. At the very least, it has surrendered its potential for thought leadership. At worst, the IMF has shown that it is run by creditor nations seeking to earn a positive return on their savings when the fundamentals do not justify such a return.
Thu, 23 Oct 2014
ICE Benchmark Administration (IBA), the new administrator of Libor has published a position paper on the future evolution of Libor. The core of the paper is a shift to “a more transaction-based approach for determining LIBOR submissions” and a “more prescriptive calculation methodology”. In this post, I discuss the following IBA proposals regarding interpolation and extrapolation:
Interpolation and extrapolation techniques are currently used where appropriate by benchmark submitters according to formulas they have adopted individually.
We propose that inter/extrapolation should be used:
- When a benchmark submitter has no available transactions on which to base its submission for a particular tenor but it does have transaction-derived anchor points for other tenors of that currency, and
- If the submitter’s aggregate volume of eligible transactions is less than a minimum level specified by IBA.
To ensure consistency, IBA will issue interpolation formula guidelines.
In my view, it does not make sense for the submitter to perform interpolations in situations that are sufficiently standardized for the administrator to provide interpolation formulas. It is econometrically much more efficient for the administrator to perform the interpolation. For example, the administrator can compute a weighted average with lower weights on interpolated submissions – ideally the weights would be a declining function of the width of the interpolation interval. Thus where many non-interpolated submissions are available, the data from other tenors would be virtually ignored (because of low weights). But where there are no non-interpolated submissions, the data from other tenors would drive the computed value. The administrator can also use non-linear (spline) interpolation across the full range of tenors. If submitters are allowed to interpolate, perverse outcomes are possible. For example, where the yield curve has a strong curvature but only a few submitters provide data for the correct tenor, these correct submissions will differ sharply from the incorrect (interpolated) submissions of the majority. The standard procedure of ignoring extreme submissions would then discard all the correct data and average all the incorrect submissions!
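The down-weighting idea can be sketched in a few lines; the specific 1/(1+width) weighting function is my own assumption, not anything IBA has proposed:

```python
# Weighted average of submissions in which interpolated submissions receive
# lower weight, declining with the width of the interpolation interval.
# The 1 / (1 + width) weighting function is my own assumption, not IBA's.

def weighted_libor(submissions):
    """submissions: list of (rate, width) where width is the interpolation
    interval in months; width 0 means a direct, non-interpolated submission."""
    weights = [1.0 / (1.0 + width) for _, width in submissions]
    rates = [rate for rate, _ in submissions]
    return sum(r * w for r, w in zip(rates, weights)) / sum(weights)

# Three direct submissions and one wide interpolation: the interpolated
# outlier barely moves the average...
subs = [(0.250, 0), (0.252, 0), (0.248, 0), (0.300, 9)]
print(round(weighted_libor(subs), 4))   # 0.2516

# ...but when only interpolated submissions exist, they drive the value.
print(weighted_libor([(0.300, 9)]))
```

Note the contrast with trimming: trimming would throw the outlier away entirely even when it is the only genuine transaction-based data point.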
Many people tend to forget that even the computation of an average is an econometric problem that can benefit from the full panoply of econometric techniques. For example, an econometrician might suggest interpolating across submission dates using a Kalman filter. Similarly, covered interest parity considerations would suggest that submissions for Libor in other currencies should be allowed to influence the estimation of Libor in each currency (simultaneous equation rather than single equation estimation). So long as the entire estimation process is defined in open source computer code, I do not see why Libor estimates should not be based on a complex econometric procedure – a Bayesian Vector Auto Regression (VAR) with GARCH errors, for example.
Mon, 20 Oct 2014
For quite some time now, I have been concerned that the SIM card in the mobile phone is becoming the most vulnerable single point of failure in online security. The threat model that I worry about is that somebody steals your mobile, transfers the SIM card to another phone, and quickly resets the passwords to your email accounts and other sites where you have provided your mobile number as your recovery option. Using these email accounts, the thief then proceeds to reset passwords on various other accounts. This threat model cannot be blocked by having a strong PIN or pattern lock on the phone or by remotely wiping the device, because the thief is using your SIM and not your phone.
If the thief knows enough of your personal details (name, date of birth and other identifying information), then with a little bit of social engineering, he could do a lot of damage during the couple of hours that it would take to block the SIM card. Remember that during this period, he can send text messages and WhatsApp messages in your name to facilitate his social engineering. The security issues are made worse by the fact that telecom companies simply do not have the incentives and expertise to perform the kind of authentication that financial entities would do. There have been reports of smart thieves getting duplicate SIM cards issued on the basis of fake police reports and forged identity documents (see my blog post of three years ago).
Modern mobile phones are more secure than the SIM cards that we put inside them. They can be secured not only with PINs and pattern locks but also with fingerprint scanners and face recognition software. Moreover, they support encryption and remote wiping. It is true that SIM cards can be locked with a PIN which has to be entered whenever the phone is switched off and on or the SIM is put into a different mobile. But I am not sure how useful this would be if telecom companies are not very careful in providing the PUK code which allows the PIN to be reset.
If we assume that the modern mobile phone can be made reasonably secure, then it should be possible to make SIM cards more secure without the inconvenience of entering a SIM card PIN. In the computer world, for example, it is pretty common (in fact recommended) to do remote (SSH) logins using only authentication keys without any user-entered passwords. This works with a pair of encryption keys – the public key sits on the target machine and the private key on the source machine. A similar system should be possible with SIM cards as well, with the private key sitting on the mobile and backed up on other devices. Moving the SIM to another phone would not work unless the thief could also transfer the private key. Moreover, you would be required to use the backed-up private key to make a request for a SIM replacement. This would keep SIM security completely in your hands and not in the hands of a telecom company that has no incentive to protect your SIM.
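The challenge-response flow can be sketched as a toy. A real scheme would use an asymmetric key pair as in SSH; here a shared-secret HMAC stands in for the signature purely to illustrate the protocol, and all names are hypothetical:

```python
# Toy challenge-response authentication for a SIM, in the spirit of SSH keys.
# A real scheme would use an asymmetric key pair (private key on the phone,
# public key with the network); here an HMAC over a shared secret stands in
# for the signature, purely to illustrate the protocol flow.
import hmac
import hashlib
import os

device_key = os.urandom(32)    # provisioned on the phone, backed up by the user
network_copy = device_key      # the verifying copy (a public key in a real scheme)

def network_challenge():
    """Network sends a fresh random challenge."""
    return os.urandom(16)

def phone_response(key, challenge):
    """Phone proves possession of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def network_verify(key, challenge, response):
    return hmac.compare_digest(phone_response(key, challenge), response)

challenge = network_challenge()
print(network_verify(network_copy, challenge, phone_response(device_key, challenge)))  # True

# A thief who moves the SIM to another phone without the key fails the check:
print(network_verify(network_copy, challenge, phone_response(os.urandom(32), challenge)))  # False
```

The point of the sketch is that authentication rests on possession of key material that never travels with the SIM, so physically moving the card achieves nothing.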
This system could be too complex for many users who use a phone only for voice and non critical communications. It could therefore be an opt-in system for those who use online banking and other services a lot and require higher degree of security. Financial services firms should also insist on the higher degree of security for high value transactions.
I am convinced that encryption is our best friend: it protects us against thieves who are adept at social engineering, against greedy corporations who are too careless about our security, and against overreaching governments. The only thing that you are counting on is that hopefully P ≠ NP.
Mon, 13 Oct 2014
Much has been written since the Global Financial Crisis about how the modern banking system has become less and less about financing productive investments and more and more about shuffling pieces of paper in speculative trading. Last month, Jordà, Schularick and Taylor wrote an NBER Working Paper “The Great Mortgaging: Housing Finance, Crises, and Business Cycles” describing an even more fundamental change in banking during the 20th century. They construct a database of bank credit in advanced economies from 1870 to 2011 and document “an explosion of mortgage lending to households in the last quarter of the 20th century”. They conclude that:
To a large extent the core business model of banks in advanced economies today resembles that of real estate funds: banks are borrowing (short) from the public and capital markets to invest (long) into assets linked to real estate.
Of course, it can be argued that mortgage lending is an economically useful activity to the extent that it allows people early in their career to buy houses. But it is also possible that much of this lending only boosts house prices and does not improve the affordability of houses to any significant extent.
The more important question is why banks have become less important in lending to businesses. One possible answer is that in this traditional function, they have been disintermediated by capital markets. On the mortgage side, however, banks are perhaps dominant only because, with their Too-Big-To-Fail (TBTF) subsidies, they can afford to take the tail risks that capital markets refuse to take.
I think the Jordà, Schularick and Taylor paper raises the fundamental question of whether advanced economies need banks at all. If regulators impose the kind of massive capital requirements that Admati and her coauthors have been advocating, and banks were forced to contract, capital markets might well step in to fill the void in the advanced economies. The situation might well be different in emerging economies.
Sun, 28 Sep 2014
The CME futures contracts on the S&P 500 index come in two flavours – the big or full-size (SP) contract is five times the size of the E-Mini (ES) contract. For clearing purposes, SP and ES contracts are fungible in a five to one ratio. The daily settlement price of both contracts is obtained by taking a volume weighted average price of both contracts taken together, weighted in the same ratio.
Yet, according to a recent SEC order against Latour Trading LLC and Nicolas Niquet, a broker-dealer is required to maintain net capital on the two contracts separately. In Para 28 of its order, the SEC says that in February 2010, Latour held 333,251 long ES contracts and 66,421 short SP contracts, and it netted these out to a long position of 1,146 ES contracts requiring a net capital of $14,325. According to the SEC, these should not have been netted out and Latour should have held a net capital of $8.32 million ($4.17 million for the ES and $4.15 million for the SP). This is surely absurd.
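The netting arithmetic in the order is easy to reproduce:

```python
# Reproducing the netting arithmetic from Para 28 of the SEC order: each
# full-size SP contract is fungible with five E-Mini ES contracts.
long_es = 333_251
short_sp = 66_421
RATIO = 5            # one SP contract = five ES contracts

net_es = long_es - RATIO * short_sp
print(net_es)        # 1146, the 1,146-contract net long position in the order
```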
It is not as if the SEC does not allow netting anywhere. It allows index products to be offset by qualified stock baskets (para 10). In other words, an approximate hedge (index versus an approximate basket) can be netted but an exact hedge (ES versus SP) cannot be netted.
PS: I am not defending Latour at all. The rest of the order makes clear that there was a great deal of incompetence and deliberate under-estimation of net capital going on. It is only on the ES/SP netting claim that I think the SEC regulations are unreasonable.
Mon, 22 Sep 2014
It is well known that financial repression more or less disappeared in advanced economies during the 1980s and 1990s, but has been making a comeback recently. Is it possible that financial repression did not actually disappear, but was simply outsourced to China? And the comeback that we are seeing after the Global Financial Crisis is simply a case of insourcing the repression back?
This thought occurred to me after reading an IMF Working Paper on “Sovereign Debt Composition in Advanced Economies: A Historical Perspective”. What this paper shows is that many of the nice things that happened to sovereign debt in advanced economies prior to the Global Financial Crisis were facilitated by the robust demand for this debt by foreign central banks. In fact, the authors refer to this period not as the Great Moderation, but as the Great Accumulation. Though they do not mention China specifically, it is clear that the Great Accumulation was driven to a great extent by China. It is also clear that much of the Chinese reserve accumulation was made possible by the enormous financial repression within that country.
This leads me to my hypothesis that just as the advanced economies outsourced their manufacturing to more efficient manufacturers in China, they outsourced their financial repression to the most efficient manufacturer of financial repression – China. Now that China is becoming a less efficient and less willing provider of financial repression, advanced economies are insourcing this job back to their own central banks.
In this view of things, we overestimated the global reduction of financial repression in the 1990s and are overestimating the rise in financial repression since the crisis.
Sat, 13 Sep 2014
A year ago, my colleagues, Prof. Sobhesh K. Agarwalla, Prof. Joshy Jacob and I created a publicly available data library providing the Fama-French and momentum factor returns for the Indian equity market, and promised to keep updating the data on a regular basis. It has taken a while to deliver on that promise, but we have now updated the data library. More importantly, we believe that we have now set up a process to do this on a sustainable basis by working together with the Centre for Monitoring Indian Economy (CMIE) who were the source of the data anyway. CMIE agreed to implement our algorithms on their servers and give us the data files every month. That ensures more comprehensive coverage of the data and faster updates.
Sun, 07 Sep 2014
Andrew Verstein has an interesting paper on the Law and Economics of Benchmark Manipulation. One of the gems in that paper is the title of this blog post: “A benchmark is to price what a credit rating agency is to quality.” Verstein is saying that just as credit rating agencies became destructive when their ratings were hardwired into various legal requirements, benchmarks also become dangerous when they are hardwired into various legal documents.
Just as in the case of rating agencies, in the case of price benchmarks also, regulators have encouraged reliance on benchmarks. Even in the equity world where exchange trading eliminates the need for many kinds of benchmarks, the closing price is an important benchmark which derives its importance mainly from its regulatory use. Verstein points out that “Indeed, it is hard to find an example of stock price manipulation that does not target the closing (or opening) price.” So we have taken a liquid and transparent market and conjured an opaque and vulnerable benchmark out of it. Regulators surely take some of the blame for this unfortunate outcome.
Another of Verstein’s points is that governments use benchmarks even when they know that they are broken: “the United States Treasury used Libor to make TARP loans during the financial crisis, despite being on notice that Libor was a manipulated benchmark.” In this case, Libor was not only manipulated but had become completely dysfunctional – I remember that the popular definition of Libor at that time was that it was the rate at which banks do not lend to each other in London. That was well before Libor became Lie-bor. The US government could easily have taken a reference rate from the US Treasury market or repo markets and then set a fat enough spread over that reference rate (say 1000 basis points) to cover the TED spread, the CDS spread, and a Bagehotian penal spread. By choosing not to do so, they lent legitimacy to what they knew very well was an illegitimate benchmark.
Tue, 02 Sep 2014
Yesterday, the Securities and Exchange Board of India (SEBI) issued regulations requiring all Research Analysts to be registered with SEBI. The problem is that the regulations use a very expansive definition of research analyst. This reminds me of my note of dissent to the report of the Financial Sector Legislative Reforms Commission (FSLRC) on the issue of definition of financial service. I wrote in that dissent that:
Many activities carried out by accountants, lawyers, actuaries, academics and other professionals as part of their normal profession could attract the registration requirement because these activities could be construed as provision of a financial service ... All this creates scope for needless harassment of innocent people without providing any worthwhile benefits.
Much the same could be said about the definition of research analyst. Consider for example this blog post by Prof. Aswath Damodaran of the Stern School of Business at New York University on the valuation of Twitter during its IPO. It clearly meets the definition of a research report in Regulation 2(w):
any written or electronic communication that includes research analysis or research recommendation or an opinion concerning securities or public offer, providing a basis for investment decision
Regulation 2(w) has a long list of exclusions, but Damodaran’s post does not fall under any of them. Therefore, Damodaran would clearly be a research analyst under several of the prongs of Regulation 2(u):
a person who is primarily responsible for:
- preparation or publication of the content of the research report; or
- providing research report; or
- offering an opinion concerning public offer,
with respect to securities that are listed or to be listed in a stock exchange
Under Regulation 3(1), Prof. Damodaran would need a certificate of registration from SEBI if he were to write a similar blog post about an Indian company. Or, under Regulation 4, he would have to tie up with a research entity registered in India.
Regulations of this kind are a form of regulatory overreach that must be prevented by narrowly circumscribing the powers of regulators in the statute itself. To quote another sentence that I wrote in the FSLRC dissent note: “regulatory self restraint ... is often a scarce commodity”.
Sat, 30 Aug 2014
A couple of weeks ago, Matt Levine at Bloomberg View described a curious incident of a company that was a public company for only six days before cancelling its public issue:
- On July 30, 2014, an Israeli company, Vascular Biogenics Ltd. (VBL) announced that it had priced its initial public offering (IPO) at $12 per share and that the shares would begin trading on Nasdaq the next day. The registration statement relating to these securities was filed with and was declared effective by the US Securities and Exchange Commission (SEC) on the same day.
- On August 8, VBL announced that it had cancelled its IPO.
What happened in between was that on July 31, the shares opened at $11.00 and sank further to close at $10.25 (a 15% discount to the IPO price) on a large volume of 1.5 million shares as compared to the total issue size of 5.4 million shares excluding the Greenshoe option (Source for price and volume data is Google Finance). This price drop was bad news for one of the large shareholders who had agreed to purchase almost 45% of the shares in the IPO. This insider was unwilling or unable to pay for the shares that he had agreed to buy. Technically, the underwriters were now on the hook, and the default could have triggered a spate of lawsuits. Instead, the company cancelled the IPO and the underwriting agreement. Nasdaq instituted a trading halt but the company appears to be still technically listed on Nasdaq.
Matt Levine does a fabulous job of dissecting the underwriting agreement to understand the legal issues involved. I am however more concerned about the relationship between the insider and the company. The VBL episode seems to suggest that if you are an insider in a company, a US IPO is a free call option. If the stock price goes up on listing, the insider pays the IPO price and buys the stock. If the price goes down, the insider refuses to pay and the company cancels the IPO.
Sat, 23 Aug 2014
Last month, the US Securities and Exchange Commission (SEC) adopted rules allowing money market funds (MMFs) to restrict (or “gate”) redemptions when there is a liquidity problem. These rules have been severely criticized on the ground that they could lead to pre-emptive runs as investors rush to the exit before the gates are imposed.
I think the criticism is valid though I was among those who recommended the imposition of gates in Indian mutual funds during the crisis of 2008. The difference is that I see gates as a solution not to a liquidity problem, but to a valuation problem. The purpose of the gate in my view is to protect remaining investors from the risk that redeeming investors exit the fund at a valuation greater than the true value of the assets. An even better solution to this valuation problem is the minimum balance at risk proposal that I blogged about two years ago.
Sat, 16 Aug 2014
Tarek Hassan and Rui Mano have an interesting NBER conference paper (h/t Econbrowser (Menzie Chinn)) that comes pretty close to saying that there is really no forward premium puzzle at all. Their paper itself tends to obscure the message by using phrases like cross-currency, between-time-and-currency, and cross-time components of uncovered interest parity violations. So what follows is my take on their paper.
Uncovered interest parity says that, ignoring risk aversion, currencies with high interest rates should be expected to depreciate so as to neutralise the interest differential. If not, risk neutral investors from the rest of the world would move all their money into the high yielding currency and earn higher returns. Similarly, currencies with low interest rates should be expected to appreciate to compensate for the interest differential so that risk neutral investors do not stampede out of the currency.
Violations of uncovered interest parity therefore have a potentially simple explanation in terms of risk premia. The problem is that the empirical relationship between interest differentials and currency appreciation is in the opposite direction to that predicted by uncovered interest parity. In a pooled time-series cross-sectional regression, currencies with high interest rates appreciate instead of depreciating. A whole investment strategy called the carry trade has been built on this observation. A risk based explanation of this phenomenon would seem to require implausible time varying risk premia. For example, if we interpret the pooled result in terms of a single exchange rate (say dollar-euro), the risk premium would have to keep changing sign depending on whether the dollar interest rate was higher or lower than the euro interest rate.
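As a purely illustrative sketch (all rates and currency moves below are hypothetical, not from the paper), the parity condition and its violation can be put in numbers:

```python
# Hypothetical numbers: uncovered interest parity (UIP) says the
# expected depreciation of the high-yield currency should offset
# the interest differential.
i_nzd = 0.06  # hypothetical one-year New Zealand dollar rate
i_jpy = 0.01  # hypothetical one-year Japanese yen rate

# UIP: (1 + i_jpy) = (1 + i_nzd) * E[S_next] / S_now, with S = yen per NZD,
# so the implied expected change in the NZD is:
uip_implied_change = (1 + i_jpy) / (1 + i_nzd) - 1
print(f"UIP-implied NZD move: {uip_implied_change:.2%}")  # about -4.72%

# The empirical puzzle: in pooled regressions the high-yield currency
# does not depreciate this much (or even appreciates), so borrowing
# yen to lend NZD (the carry trade) earns a positive excess return.
actual_change = -0.02  # hypothetical: the NZD falls only 2%
carry_excess_return = (1 + i_nzd) * (1 + actual_change) - (1 + i_jpy)
print(f"Carry trade excess return: {carry_excess_return:.2%}")
```

The point of the sketch is only the sign and magnitude logic: the carry trade profits whenever the high-yield currency depreciates by less than the interest differential.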
This is where Hassan and Mano come in with a decomposition of the pooled regression result. They argue that in a pooled sample, the result could be driven by currency fixed effects. For example, over their sample period, the New Zealand interest rate was consistently higher than the Japanese rate and an investor who was consistently short the yen and long the New Zealand dollar would have made money. The crucial point here is that a risk based explanation of this outcome would not require time varying risk premia – over the whole sample, the risk premium would be in one direction. What Hassan and Mano do not say is that a large risk premium would be highly plausible in this context. Japan is a net creditor nation and Japanese investors would require a higher expected return on the New Zealand dollar to take the currency risk of investing outside their country. At the same time, New Zealand is a net debtor country and borrowers there would pay a higher interest rate to borrow in their own currency than take the currency risk of borrowing in Japanese yen. It would be left to hedge funds and other players with substantial risk appetite to try and arbitrage this interest differential and earn the large risk premium on offer. Since the aggregate capital of these investors is quite small, the return differential is not fully arbitraged away.
Hassan and Mano show that empirically only the currency fixed effect is statistically significant. The time varying component of the uncovered interest parity violation within a fixed currency pair is not statistically significant. Nor is there a statistically significant time fixed effect related to the time varying interest differential between the US dollar and a basket of other currencies. To my mind, if there is no time varying risk premium to be explained, the forward premium puzzle disappears.
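The logic of the decomposition can be mimicked on simulated data. The following is a minimal sketch with made-up numbers (not the authors' actual methodology): excess returns are generated to depend only on each currency's fixed mean interest differential, and the pooled regression slope is compared with the within-currency slope after demeaning.

```python
import random

random.seed(0)

# Two hypothetical currencies with fixed mean interest differentials
# against the dollar; excess returns depend only on that fixed effect.
currencies = {"NZD": 0.05, "JPY": -0.04}
rows = []
for cur, mean_diff in currencies.items():
    for _ in range(200):
        x = mean_diff + random.gauss(0, 0.01)         # interest differential
        y = 0.5 * mean_diff + random.gauss(0, 0.005)  # excess return
        rows.append((cur, x, y))

def slope(pairs):
    """OLS slope of y on x."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    var = sum((x - mx) ** 2 for x, _ in pairs)
    return cov / var

pooled = slope([(x, y) for _, x, y in rows])

# Demean within each currency to strip out the fixed effect.
within = []
for cur in currencies:
    sub = [(x, y) for c, x, y in rows if c == cur]
    mx = sum(x for x, _ in sub) / len(sub)
    my = sum(y for _, y in sub) / len(sub)
    within.extend((x - mx, y - my) for x, y in sub)

print(f"pooled slope: {pooled:.2f}")                  # driven by fixed effects
print(f"within-currency slope: {slope(within):.2f}")  # near zero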
The paper goes on to show that the carry trade as an investment strategy is primarily about currency fixed effects. Hassan and Mano consider “a version of the carry trade in which we never update our portfolio. We weight currencies once, based on our expectation of the currencies’ future mean level of interest rates, and never change the portfolio thereafter.” This “static carry trade” strategy accounts for 70% of the profits of the dynamic carry trade that rebalances the portfolio each period to go long the highest yielding currencies at that time and go short the lowest yielding currencies at that time. More importantly, in the carry trade portfolio, the higher yielding currencies do depreciate against the low yielding currencies. It is just that the depreciation is less than the interest differential and so the strategy makes money. So uncovered interest parity gets the sign right and only the magnitude of the effect is lower because of the risk premium. There is a large literature showing that the carry trade loses money at times of global financial stress when investors can least afford to lose money and therefore a large risk premium is intuitively plausible.
Sat, 09 Aug 2014
Last month, the Permanent Subcommittee on Investigations of the United States Senate published a Staff Report on how hedge funds were using basket options to reduce their tax liability. The hedge fund’s underlying trading strategy used 100,000 to 150,000 trades per day and many of those trading positions lasted only a few minutes. Yet, because of the use of basket options, the trading profits ended up being taxed at the long term capital gains rate of 15-20% instead of the short term capital gains rate of 35%. The hedge fund saved $6.8 billion in taxes during the period 2000-2013. Perhaps more importantly, the hedge fund was also able to circumvent leverage restrictions.
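A back-of-envelope calculation shows the stakes. The profit figure below is hypothetical; only the tax rates come from the Staff Report.

```python
# Hypothetical $1bn of trading profit; rates as cited in the Staff Report.
profit = 1_000_000_000
short_term_rate = 0.35  # short-term capital gains rate
long_term_rate = 0.20   # upper end of the long-term capital gains range

tax_saved = profit * (short_term_rate - long_term_rate)
print(f"Tax saved per $1bn of profit: ${tax_saved:,.0f}")  # $150,000,000
```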
The problem is that derivatives blur a number of distinctions that are at the foundation of the tax law everywhere in the world. Alvin Warren described the problem in great detail more than two decades ago (“Financial contract innovation and income tax policy.” Harvard Law Review, 107 (1993): 460). More importantly, Warren’s paper also showed that none of the obvious solutions to the problem would work.
We have similar problems in India as well. Mutual funds that invest at least 65% in equities produce income that is practically tax exempt for the investor, while debt mutual funds involve substantially higher tax incidence. A very popular product in India is the “Arbitrage Mutual Fund” which invests at least 65% in equities, but also hedges the equity risk using futures contracts. The result is “synthetic debt” that has the favourable tax treatment of equities.
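A stylised example (all prices hypothetical) of how the hedge turns equity into synthetic debt: buying the stock and selling a futures contract on it locks in the futures basis, whatever the stock subsequently does.

```python
spot = 100.0     # buy the stock at 100 (hypothetical)
futures = 104.0  # simultaneously sell one-year futures at 104 (hypothetical)

# At expiry the futures settles at the then spot price S, so the combined
# payoff is S (stock) + (futures - S) (short futures) = futures.
for s_at_expiry in (80.0, 100.0, 130.0):
    payoff = s_at_expiry + (futures - s_at_expiry)
    assert payoff == futures  # independent of where the stock ends up

locked_in_return = futures / spot - 1
print(f"Locked-in 'interest' on the position: {locked_in_return:.2%}")  # 4.00%
```

The payoff is fixed in advance like a debt instrument, yet more than 65% of the portfolio is equities, which drives the favourable tax treatment.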
In some sense, this is nothing new. In the Middle Ages, usury laws in Europe prohibited interest bearing debt, but allowed equity and insurance contracts. The market response was the infamous “triple contract” (contractus trinus) which used equity and insurance to create synthetic debt.
What modern taxmen are trying to do therefore reminds me of Einstein’s definition of insanity as doing the same thing over and over again and expecting different results.