Sep 9, 2014

A How-to Guide to Quantifying Model Risk

As regulatory pressure continues to increase, institutions have elevated the importance of managing their model risk. Regulatory focus has largely shifted to the “how”: regulators want to know how the models work and how institutions use and manage them.

During a recent webinar, Dr. David Eliezer, Vice President and Head of Model Validation at Numerix, teamed up with Sidhartha Dash of Chartis Research to discuss the latest industry and regulatory developments in model risk management and some best practices for quantifying and monitoring model risk.

Watch the related on-demand webinar: “Chartis Research & Numerix Co-Presentation: Best Practices in Model Risk Management”

Although institutions are approaching model risk governance and management with different goals in mind, everyone shares one common belief: model risk measurement and mitigation are now central operations in any financial organization.

This increased regulatory focus on the “how” means that financial institutions and banks need to explain the details of their models in order to provide evidence of the models’ functionality – in other words, they need to show regulators how, and how well, their models are working. For many institutions, this means rethinking their approach to model risk.

Central to this reassessment is standardizing quantification: how models are tested, validated, reviewed, and managed. As Sid Dash concluded, “If it can’t be quantified or measured, then it’s not worth doing.”

A strategic redefinition of model risk is needed, and that can only be done by establishing a standard way of communicating about it. This means having a discussion that looks at model risk not only from a quantitative and numerical perspective, but also from a business perspective.

Below, Dr. David Eliezer argues for just such a framework for model validation of derivative products.

Dr. David Eliezer:

“Model Risk” is a tricky idea to get one’s head around, since we normally have to choose a model first in order to compute any sort of risk. How does a model compute the risk that it, itself, is wrong? In this talk, we address this problem.

We address this problem in the context of derivative pricing specifically. Derivatives are only one of several types of assets that a bank may hold, so we are addressing only part of the problem. However, derivatives are also the most abstract in their pricing and the hardest to understand, so we hope that our effort helps to slay one of the most troublesome problems in the risk world, the biggest and baddest risk dragon.

Seeking a Definition of Model Risk

Model Risk needs to be quantifiable to be useful, so we have to think about what kind of quantity we need Model Risk to be. In what kind of units should it be measured? Let us think about how it is used.  Model Risk must be aggregated with other risks, by regulators and auditors, to evaluate a total risk of the bank’s portfolio. Furthermore, risk numbers are often computed in ratios with returns, as in the Sharpe Ratio. This suggests that Model Risk should be measured in either currency units or log currency units, just as with market risk.

Market risk for a financial instrument can be thought of as a range of plausible future prices. We shall seek a definition of model risk that is, in some form, a range of reasonable prices. Whatever this new definition is, it has to give stable answers in time—that is, it must give similar answers each time we go back to test it.

One of the simplest ways to estimate the range of reasonable prices is to survey all of the market-standard pricing methods. Naturally, this can only be done if a sufficiently rich benchmarking library is available. It also relies on the assumption that the standard market models explore the full range of reasonable prices, which is not necessarily true, and the survey requires human judgment to set up, so it can still draw heavy criticism from regulators. For these reasons, the approach is not completely convincing.
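To make the survey idea concrete, here is a purely illustrative Python sketch (not any particular benchmarking library): it prices the same European call under two market-standard models, Black-Scholes and a Merton jump-diffusion, and reports the spread between them as a crude range of reasonable prices. All function names and parameter values are hypothetical.

import math
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

def merton_jump_call(S, K, T, r, sigma, lam, mu_j, delta_j, n_terms=60):
    """Merton (1976) jump-diffusion call price, written as the standard
    Poisson-weighted sum of Black-Scholes prices."""
    k = math.exp(mu_j + 0.5 * delta_j ** 2) - 1.0          # mean relative jump size
    lam_p = lam * (1.0 + k)
    price = 0.0
    for n in range(n_terms):
        sigma_n = math.sqrt(sigma ** 2 + n * delta_j ** 2 / T)
        r_n = r - lam * k + n * math.log(1.0 + k) / T
        weight = math.exp(-lam_p * T) * (lam_p * T) ** n / math.factorial(n)
        price += weight * black_scholes_call(S, K, T, r_n, sigma_n)
    return price

# Hypothetical trade and model parameters, for illustration only.
S, K, T, r = 100.0, 100.0, 1.0, 0.02
survey = {
    "Black-Scholes (20% vol)": black_scholes_call(S, K, T, r, sigma=0.20),
    "Merton jump-diffusion": merton_jump_call(S, K, T, r, sigma=0.17,
                                              lam=0.5, mu_j=-0.05, delta_j=0.10),
}
low, high = min(survey.values()), max(survey.values())
for name, price in survey.items():
    print(f"{name:28s} {price:8.4f}")
print(f"survey range of reasonable prices: [{low:.4f}, {high:.4f}], spread {high - low:.4f}")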

Measuring Model Risk: Leveraging the Standard Deviation of the Cost of Hedging

There is a better method, one whose rationale can be defended through arbitrage, and it comes to us directly from option pricing theory: the standard deviation of the cost of hedging. This measure, first suggested by Bakshi, Cao, and Chen in 1997, is a more systematic approach that captures the property of the model practitioners care most about: the ability of the model to reliably predict the trader’s hedging costs.

This measure fits nicely with the conceptual definition worked out at the beginning of this talk: the range of reasonable prices. In the ideal case, the option price is identified with the cost of hedging, which is riskless and has zero variance. In real life it is not riskless, but the risk is greatly reduced compared to unhedged option returns. We can calculate the cost of hedging on many paths to produce a distribution of the cost of hedging. This distribution will have a peak in which most of the probability is concentrated.

A price offered outside of this peak is a price that can be arbitraged successfully with high probability, by using the model under consideration to replicate the option. Thus, the width of this peak in the probability distribution, i.e., the standard deviation of the cost of hedging, is the range of reasonable prices that we identified with our measure of model risk, and it is indeed enforceable by arbitrage. This support by arbitrage-trading operations is what gives the quantity its power as a measure of model risk.
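As a minimal sketch of this calculation, the Python code below assumes (purely for illustration) geometric Brownian motion paths, Black-Scholes deltas, and daily rebalancing; a real validation would hedge with the candidate model on paths built from market data. It delta-hedges a short call along each simulated path, records the discounted cost of replication, and reports the standard deviation of that cost across paths as the width of the peak.

import numpy as np
from scipy.stats import norm

def bs_delta(S, K, tau, r, sigma):
    """Black-Scholes delta of a European call (tau = time to expiry)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

def hedging_cost_distribution(S0, K, T, r, model_vol, realized_vol,
                              n_steps=252, n_paths=10000, seed=0):
    """Delta-hedge a short call with the model's deltas along simulated paths
    and return the time-0 cost of replication on each path."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0, dtype=float)
    delta = bs_delta(S, K, T, r, model_vol)
    cash = -delta * S                                  # borrow to buy the initial hedge
    for i in range(1, n_steps):
        z = rng.standard_normal(n_paths)
        S = S * np.exp((r - 0.5 * realized_vol ** 2) * dt
                       + realized_vol * np.sqrt(dt) * z)
        cash *= np.exp(r * dt)                         # interest on the cash account
        new_delta = bs_delta(S, K, T - i * dt, r, model_vol)
        cash -= (new_delta - delta) * S                # rebalance the hedge
        delta = new_delta
    # Final step to expiry: unwind the hedge and pay the option payoff.
    z = rng.standard_normal(n_paths)
    S = S * np.exp((r - 0.5 * realized_vol ** 2) * dt + realized_vol * np.sqrt(dt) * z)
    cash = cash * np.exp(r * dt) + delta * S - np.maximum(S - K, 0.0)
    return -cash * np.exp(-r * T)                      # cost of manufacturing the payoff

costs = hedging_cost_distribution(S0=100.0, K=100.0, T=1.0, r=0.02,
                                  model_vol=0.20, realized_vol=0.20)
print(f"mean cost of hedging:                   {costs.mean():.4f}")
print(f"model-risk width (std of hedging cost): {costs.std():.4f}")

In the ideal case of continuous hedging with a correct model, the standard deviation collapses to zero and the mean cost converges to the model price; rerunning the sketch with realized_vol different from model_vol widens the distribution, which is exactly the behavior this measure is meant to expose.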

A Measure of Model Risk: Take a Closer Look

This model risk measure, the standard deviation of the cost of hedging, derives its power from the fact that it takes advantage of the properties of the models that are specific to option pricing, instead of treating the model as a generic unknown function. Use of these very problem-specific properties makes the test tighter and more data-efficient.

Note also that this measure of model risk incorporates all of the modeling assumptions, including calibration choices, data choices, hedge instrument choices, and so on, so that if poor choices are made in these areas, it will be evident in the computation. Even so, the measure requires no customization to either models or payoffs, so the same testing code can be used across all derivatives in the bank’s portfolio. Finally, the standard deviation of the cost of hedging is automatable and requires no spreadsheet work or other “by hand” calculations.

We should pause to note that this method requires the generation of thousands of historical paths, similar to those used in historical VaR simulations. Generating many historical paths from the one true historical path we have requires the path-construction algorithm to make some form of stationarity assumption. This means that the test must be run with a variety of different assumptions in order to show that our conclusions do not depend heavily on any one of them.
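One common way to build many paths from a single historical series under a stationarity assumption is a block bootstrap of log-returns; the sketch below is one such construction, and the block length (as well as the choice of bootstrap scheme itself) is exactly the kind of assumption that should be varied when the test is rerun.

import numpy as np

def block_bootstrap_paths(log_returns, horizon, n_paths, block_len=20, seed=0):
    """Resample contiguous blocks of historical log-returns (circular block
    bootstrap) into n_paths synthetic return paths of length horizon.
    Assumes the return series is roughly stationary."""
    rng = np.random.default_rng(seed)
    r = np.asarray(log_returns, dtype=float)
    n = len(r)
    n_blocks = int(np.ceil(horizon / block_len))
    paths = np.empty((n_paths, n_blocks * block_len))
    for p in range(n_paths):
        starts = rng.integers(0, n, size=n_blocks)
        blocks = [r[(s + np.arange(block_len)) % n] for s in starts]   # wrap around
        paths[p] = np.concatenate(blocks)
    return paths[:, :horizon]

# Stand-in historical data: one year of daily returns (illustration only).
hist_returns = np.random.default_rng(1).normal(0.0, 0.01, size=252)
ret_paths = block_bootstrap_paths(hist_returns, horizon=252, n_paths=10000)
spot_paths = 100.0 * np.exp(np.cumsum(ret_paths, axis=1))   # price paths from a 100.0 spot
print(spot_paths.shape)                                     # (10000, 252)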

A by-product that we can easily compute alongside the cost-of-hedging distribution is the daily hedge error distribution. This is another very valuable distribution, which lets us evaluate the ongoing performance of the model after the initial validation has taken place. It allows us to compute the likelihood that any particular hedge error realized in a day’s trading is consistent with the model. Extremely small probabilities from this calculation suggest that the model may be losing its validity, and so can be used to trigger an inquiry into the current validity of the model’s assumptions.
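As an illustration of how that monitoring step might look, the sketch below takes the hedge errors simulated during validation (assumed to be available as an array) and computes the empirical two-sided tail probability of the hedge error realized in today’s trading; a very small value flags the model for review. The 1% trigger level and all numbers are hypothetical.

import numpy as np

def hedge_error_tail_prob(simulated_errors, realized_error):
    """Empirical two-sided probability of a daily hedge error at least as
    extreme as the realized one, under the model's own simulated
    hedge-error distribution (centered at its median)."""
    sim = np.asarray(simulated_errors, dtype=float)
    centered = np.abs(sim - np.median(sim))
    observed = abs(realized_error - np.median(sim))
    return float(np.mean(centered >= observed))

# Hypothetical numbers: simulated errors from validation plus today's realized error.
rng = np.random.default_rng(0)
simulated = rng.normal(0.0, 1000.0, size=50000)     # stand-in for the validation output
p = hedge_error_tail_prob(simulated, realized_error=4200.0)
print(f"tail probability of today's hedge error: {p:.4%}")
if p < 0.01:                                        # illustrative trigger level
    print("Hedge error looks inconsistent with the model; trigger a review.")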

For more information, watch the related on-demand webinar: “Chartis Research & Numerix Co-Presentation: Best Practices in Model Risk Management.”

