The Real Causes of the Financial Crisis
The following is an excerpt from the most recent letter to the partners of Braewick Holdings LP. I'll be posting a presentation that goes along with this commentary shortly.
There have been many explanations thrown around about how we got ourselves into this mess. However, I have a slightly different take on what really caused the problem—and what hasn't been fixed yet.
The public always enjoys finding someone to burn at the stake. Who is to blame for the current mess? Many culprits have been named: greedy executives, the Fed, short sellers, Democrats, Republicans, CDOs, CDSs, real estate speculators, and so on. However, the real issues are more systemic. Derivatives and bad mortgages may be where the problem started, but, in and of themselves, they did not cause this mess.
In my view, there were three problems that led to the collapse of financial markets: misaligned incentives, poor risk management, and needless complexity. None of these problems is specific to the current crisis; they are widespread and must be addressed to avoid future dilemmas.
If You Give A Mouse a Cookie…
He’s going to want some milk. And if you give a Wall Street executive a $30 million bonus, he’s going to want a bigger one (and he’ll use the same methods to get it—after all, it worked the first time, didn’t it?).
Misaligned incentives were pervasive. And because incentives drive behavior, the wrong incentives eventually produce outcomes nobody wanted.
This occurred on almost every level. Executives were rewarded for taking risks with only short-term payoffs. Mortgage originators made money based on the volume of mortgages they wrote, not on their quality. Rating agencies were paid by the very companies they had to subjectively rate, creating a huge conflict of interest. Home buyers were incentivized to buy houses they couldn't afford with low down payments, teaser rates, no-documentation Adjustable Rate Mortgages, and 0% interest.
Many of the asset managers (including banks, hedge funds, and insurance companies) were given incentives to take huge risks with the firm’s capital only because they produced outsized short-term gains. They were given very large bonuses or a cut of the profits with no downside risk (other than a pink slip and severance payment). There was no link between immediate actions and their future consequences.
Business managers and decision makers need to constantly look at how their incentives are structured. People will naturally act in their own self-interest, and managers need to plan accordingly. Some incentive problems aren't so obvious—giving a trader a cut of the profits may not sound bad at all. But even if it's not their intent, people usually end up gaming the incentive in their favor. If traders aren't penalized for taking on long-term risks in pursuit of short-term profit, then guess what—that's what they'll do.
This is why incentive structures need to be carefully examined to make sure they serve the best overall interest of the firm (which, in the case of financial institutions, is long-term survival while earning adequate returns for shareholders).
Here are a few suggestions for the executive compensation problem: A rising tide lifts all boats, and executives shouldn’t be paid simply because times are good. Executive stock options need to be tied to an industry or overall market metric. Even better, they should be given shares or restricted shares in the company in place of large bonuses. This gives the manager at least some long-term downside.
Of course, share ownership alone isn't the answer. In setting short-term compensation, preservation of capital should be the primary motivator. There should be NO severance, and there should be claw-back provisions so that previous bonuses can be recovered if things turn for the worse. However, there's no perfect solution.
So don’t give a mouse a cookie—unless he really deserves it.
Don’t Worry, the Odds are Extremely Low
Financial institutions weren’t managing risk correctly. Sure, they had “Risk Management” departments. But they failed to adequately judge or protect against certain risks. This failure was magnified when institutions borrowed up to thirty times their net worth.
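To make the leverage point concrete, here is a rough sketch in Python using stand-in numbers of my own (nothing from the letter): at thirty-to-one leverage, an asset decline of only about 3.3% is enough to erase a firm's entire net worth.

```python
# Hypothetical illustration: how 30:1 leverage magnifies asset declines.
# Equity is normalized to 1; assets are 30x equity (29x is borrowed).
assets = 30.0
equity = 1.0

for asset_decline in (0.01, 0.02, 0.0333, 0.05):
    loss = assets * asset_decline            # losses on assets hit equity first
    remaining_equity = equity - loss
    print(f"assets fall {asset_decline:.2%} -> equity remaining: {remaining_equity:+.2f}")
```

A 1% slip in asset values takes out nearly a third of equity; anything beyond roughly 3.3% leaves the firm insolvent.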
Moral hazard is one issue with modern risk management. The ability to hedge a multitude of risks or to transfer those risks to other institutions is a relatively new phenomenon. In my view, this ability made many companies complacent and led them to take on too much debt, keep too little capital reserves, and seek higher returns by investing in shoddy assets.
One system that many financial companies use for risk management is Value at Risk (VaR). Basically, it’s a complex form of scenario analysis that tries to give the firm a look at how much risk is being taken. A single scenario would be something like this: “How much would our portfolio be affected if the stock market went up 5%, interest rates declined ½%, and oil prices fell 3%?” The answer would be found by looking at historical data on performance and correlation. But that’s just one scenario out of a whole lot (technically infinite). The end result of the VaR analysis gives you a dollar amount of loss for a certain percentile. It gets the final figure from combining all the different outcomes and probabilities of every scenario. The result looks like this: “The 98th percentile, one-month VaR is -$20 million.” (Meaning that 98% of the time, your holdings won’t lose more than $20 million in a one-month period.)
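For readers who want to see the mechanics, here is a minimal sketch of a historical-simulation VaR calculation in Python. The return history and portfolio value are placeholders I made up, and real VaR systems layer on full scenario grids and correlation assumptions; this only shows where a figure like "the 98th percentile, one-month VaR" comes from.

```python
import numpy as np

# Placeholder inputs (not real data): ten years of monthly portfolio returns
# and a $1 billion portfolio.
rng = np.random.default_rng(0)
monthly_returns = rng.normal(0.01, 0.04, size=120)
portfolio_value = 1_000_000_000

def historical_var(returns, value, percentile=98):
    """Dollar loss that is not exceeded `percentile`% of the time."""
    pnl = returns * value                          # convert returns to dollar P&L
    cutoff = np.percentile(pnl, 100 - percentile)  # e.g. the 2nd-percentile outcome
    return -cutoff                                 # report as a positive loss

print(f"98th percentile, one-month VaR: ${historical_var(monthly_returns, portfolio_value):,.0f}")
```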
There are a few problems with the VaR system. The first: humans are inherently good at judging some risks and inherently bad at judging others. We're fairly good at estimating the odds of common events, like things that happen 20% or 90% of the time, because we can quickly assess risks based on our past experiences. But when it comes to estimating the odds of extremely rare events (say, a one-in-a-hundred-year occurrence), our natural abilities fail us. Sophisticated statistical models don't provide much relief either. We can't reliably predict the frequency of rare events, yet VaR is built on that very ability.
The second: the end result of any model is dependent upon the original inputs. Garbage in, garbage out. (Or in the eloquent words of Charlie Munger, “if you mix raisins with turds, you’ve still got turds.”) Using historical data to calculate future returns and probabilities can be extremely dangerous. Despite the fact that data for some financial instruments only goes back a few years, VaR practitioners were using it to estimate the odds of once-in-a-century events. If you look in the rear-view mirror long enough, you’ll eventually hit a brick wall.
Finally, and most importantly, the final VaR figure ignores magnitude—the maximum size of losses. There may be only a 2% chance that the portfolio loses more than $20 million in a one-month period. But that loss could be $21 million, or it could be $500 million. If the latter wipes out the equity of the firm, then it’s game over.
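As a follow-on to the sketch above (same caveats: invented numbers, nothing from the letter), the snippet below computes a 98% VaR for a fat-tailed portfolio alongside the average and worst losses beyond that cutoff, which is exactly the information the headline VaR number throws away.

```python
import numpy as np

# Invented fat-tailed P&L, not real data: the point is only that the tail
# beyond the VaR cutoff can be much deeper than the cutoff itself.
rng = np.random.default_rng(1)
portfolio_value = 1_000_000_000
pnl = rng.standard_t(df=3, size=100_000) * 0.02 * portfolio_value

var_98 = -np.percentile(pnl, 2)          # loss not exceeded 98% of the time
tail = pnl[pnl <= -var_98]               # the worst 2% of outcomes
expected_shortfall = -tail.mean()        # average loss once VaR is breached
worst_case = -tail.min()                 # deepest simulated loss

print(f"98% one-month VaR:       ${var_98:,.0f}")
print(f"Average loss beyond VaR: ${expected_shortfall:,.0f}")
print(f"Worst simulated loss:    ${worst_case:,.0f}")
```

The average and worst tail losses come out far larger than the VaR figure itself, which is the gap between "$21 million" and "$500 million" in the paragraph above.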
Banks and financial institutions weren’t the only ones who bought into the VaR model. Regulators and rating agencies used the same analysis to ensure that the company had enough capital on hand or that it still deserved its triple-A rating. This type of backward-looking, false-precision risk analysis must be stopped to prevent future disasters.
Tight Coupling and Interactive Complexity
One of the biggest systemic problems in the financial markets is complexity. It has been getting worse in recent years with the creation of derivative instruments and rapid consolidation of the industry.
To help explain the problem, I’ll borrow a concept from Richard Bookstaber’s recent book “A Demon of Our Own Design.” Engineers use the terms tight coupling and interactive complexity to describe how certain systems function and interact. When applied to the financial markets, they do an excellent job at explaining why the system is a disaster waiting to happen.
Tight coupling is where every component of a process is tightly linked, with little room for error. Bread making is tightly coupled—once the yeast is added, the remaining steps must follow precise methods and timing. Each action of the system immediately triggers the next. A basic assembly line isn’t very complex, but it is tightly coupled. If one part of the process stops, the entire system freezes and the widgets pile up behind it (think of Charlie Chaplin in Modern Times). There’s no slack in the system, and no point at which to intervene if there’s a problem.
Interactive complexity is when a system is not only complex, but has components that can interact in unexpected or varied ways. There are non-linear interactions and feedback loops that occur within the process. A university is one example. There are many different moving parts with all the students, teachers, and departments that have to interact and eventually fit together. Schedules can conflict, but everything usually works out as the structure is loosely coupled. A problem that could be easily addressed in a normal system may quickly spiral out of control in an interactively complex system. However, as long as the system isn’t tightly coupled, the problem can eventually be fixed because there is the time and flexibility to solve it.
But when a structure is both tightly coupled and interactively complex, it can be a formula for disaster. Nuclear reactors are a prime example of a system that shares both attributes. The components that monitor and control a reactor are extremely complex, and at times can interact in a completely unpredictable manner. A problem in one of these tightly coupled processes can quickly lead to the radioactive material becoming unstable. This can cause a chain reaction that ends up destroying the reactor and poisoning the surrounding area. Sound familiar? Modern financial markets are another perfect example of this hazardous combination.
So how do we fix the problem? Simply regulating the current complexity won’t cut it. In fact, it can make markets even more complex and thus add to the problem. Financial markets will never be simple. However, less complexity can do a lot to not only mitigate a crisis but make it easier to solve. There are a few ways to accomplish this: less leverage, simpler financial instruments, and more organizational redundancy.
Less overall leverage makes the system more loosely coupled (and hence more flexible). Simplifying financial instruments is an obvious way to reduce complexity. Although derivatives may not have been the sole cause of the crisis, they certainly didn’t help. A small bank with nothing but simple loans and mortgages on its balance sheet is much easier to deal with and understand. It doesn’t solve everything, but it keeps things more linear and less complex.
Organizational redundancy is necessary to reduce the inter-connected nature of the financial markets. Redundancy is another engineering concept where components of a system are duplicated for backup and increased reliability. Redundancy is forced out when financial companies merge and create complex, bureaucratic organizations. (Think of the massively complex Citigroup holding company—with $2 trillion in assets and over 350,000 employees.)
Consolidation has become increasingly common in the last twenty years. In the midst of the current crisis, financial companies have been merging at an even greater pace to prop each other up. Bank consolidation may be a temporary solution, but in the long term, many small banks are much better than a few huge financial supermarkets. In a large bank conglomerate, a problem in one segment can put the entire business in jeopardy. Consolidation also makes things exceedingly unpredictable, which is the very nature of an interactively complex system.
* * *
In conclusion, unless the issues of misaligned incentives, poor risk management, and needless complexity are resolved, there will be more problems with our financial system in the future. They may come in different forms, but the outcome will be similar.
SOURCES:
“A Demon of Our Own Design”, by Richard Bookstaber
“Plight of the Fortune Tellers”, by Riccardo Rebonato