Looking in the rearview mirror, it’s usually easy to see whether previously accepted risks were adversely realized. The moment you pay back the $20 I loaned you last month, for example, I know that the credit risk associated with our transaction has cured.
But what about model risk? Back in the days prior to COVID-19, banks developed and implemented myriad risk models and then made key business decisions using model projections. When COVID-19 hit, it became clear that most models missed their targets by considerable margins. At face value, therefore, it seems that model risks taken on in 2019 were adversely realized this year.
The problem with this assessment is that it conflates model risk with actual underlying risk – the fact that bad stuff sometimes happens even to those with perfectly good models. It's difficult to tell the difference between model risk and actual risk, even with the benefit of hindsight. To explain why, a thought exercise may be helpful.
Suppose we meet in a casino and I tell you I have a model that beats the roulette wheel, picking the correct color 90% of the time. I hand you a model validation report showing detailed back tests. The model says “red” and you bet accordingly.
The ball then lands on black.
So, my model has provided an incorrect prediction and you have lost some money. One possibility is that the model is bad – perhaps performing no better than chance. It's also possible that my model performed exactly as advertised, and your spin simply fell within the 10% error rate. If the former is true, you have identified an adverse model risk event – either I'm a charlatan or the condition of the wheel has changed since my backtests were performed. In the latter case, you're just unlucky.
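The single-spin dilemma can be quantified with Bayes' rule. The sketch below is illustrative only, and the inputs are my assumptions, not the article's: a 50/50 prior that the model is genuine, and a European wheel, on which a pure-chance color guess is right with probability 18/37.

```python
# A sketch of the single-spin dilemma using Bayes' rule.
# Assumptions (illustrative, not from the article): a 50/50 prior that
# the model is genuine, and a European wheel, where a pure-chance guess
# picks the right color with probability 18/37.

p_good_prior = 0.5          # prior belief that the model really hits 90%
p_miss_if_good = 0.10       # the advertised 10% error rate
p_miss_if_bad = 1 - 18/37   # a chance guess misses ~51.4% of the time

# Posterior probability the model is bad, given one observed miss.
p_bad_given_miss = (
    (1 - p_good_prior) * p_miss_if_bad
    / ((1 - p_good_prior) * p_miss_if_bad + p_good_prior * p_miss_if_good)
)
print(round(p_bad_given_miss, 3))  # prints 0.837
```

Even starting from a 50/50 prior, one miss leaves roughly a one-in-six chance that the model was fine and the spin was simply unlucky – a single realization settles very little.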
The solution to this dilemma is to conduct a large number of repeated experiments, wherever that's possible. Coming back to banking: if the model is used to vet credit card applications, for instance, you could run the experiment 1,000 times, compare the results to a chance allocation (or to a challenger model) and put a precise dollar value on the model's predictions.
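As a sketch of why repetition resolves the ambiguity, the simulation below (hit rates, stake and trial count are all hypothetical) pits a genuine 90% model against an 18/37 chance baseline over 1,000 trials:

```python
import random

random.seed(42)  # reproducible illustration

TRIALS = 1_000
STAKE = 10            # hypothetical dollars per bet
P_MODEL = 0.90        # the advertised model hit rate
P_CHANCE = 18 / 37    # European-wheel chance of guessing the color

# Count wins for the model and for a pure-chance allocation.
model_hits = sum(random.random() < P_MODEL for _ in range(TRIALS))
chance_hits = sum(random.random() < P_CHANCE for _ in range(TRIALS))

# Dollar value of the model's predictions relative to chance.
value_added = (model_hits - chance_hits) * STAKE
print(model_hits, chance_hits, value_added)
```

Over one spin the two hypotheses are indistinguishable; over a thousand, their hit counts sit dozens of standard deviations apart, and the gap converts directly into a dollar figure.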
Reality Check: The True Impact of Unpredictable Crises
In most critical banking applications, though, you only get to play the tape once. You have a single portfolio to manage (ultimately judged by aggregate performance measures) and a single timeline for reality with no sliding doors, no parallel universes.
Moreover, recessions are rare and, invariably, highly distinct and unpredictable. For problems like stress testing, loss forecasting, risk-appetite setting and capital allocation, you will never know whether you were a victim of reality or of a bad model. You may be able to reason that the model was substandard, that you could have done better given the information set available at the time, but you’ll never get a do-over with which to test your theories.
When viewed this way, there’s no way that COVID-19 was a model risk event. It is unimaginable that someone would have had a pandemic-ready loss forecasting model in 2019; if one had been presented to a risk committee, it would have been rejected on the grounds of overfitting.
Put simply, COVID-19 broke reality, not necessarily the 2019-era model cohort.
Rabbit-Hole and Model-Liquidity Risks
There are two other forms of model risk, exposed by COVID-19, that may have current and future implications. I'll call these model rabbit-hole risk and model-liquidity risk.
Rabbit-hole risk is the situation where modelers are compelled, perhaps by regulators, validators, model owners or their own convictions, to “chase” the data wherever it goes. The idea is to require models to explain and fit all past events, irrespective of the likelihood that these circumstances will ever recur.
The point here is that consumer and business behavior has been very unusual during the pandemic. Some are suggesting that a “new normal” will be forged in the COVID-19 furnace, but this is far from certain.
In any event, I doubt that this will change the principle that spells of unemployment are normally associated with elevated credit risk. Once the medical emergency has ended – with a vaccine, a virus mutation, mass mask-wearing, whatever – I suspect that behavior will revert to what we normally see on the back-end of deep recessions.
If this is right, forcing all risk models to fit 2020 data perfectly will be a fool's errand. It will be very challenging to work out which COVID-19-era signals have relevance in a post-COVID-19 world, and which can be safely ignored. That said, in future years, I suspect that including COVID-19 dummies in risk models – a tacit admission that surreal behavior cannot be explained – will become something of a routine practice.
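A COVID-19 dummy of the kind described can be sketched on synthetic data (every figure below is invented for illustration; NumPy is assumed): an indicator for the pandemic quarters absorbs the level shift that the usual unemployment relationship cannot explain.

```python
import numpy as np

# Hypothetical quarterly panel: unemployment rate (%) and a 0/1
# COVID-era flag for the last two quarters. All numbers are invented.
unemployment = np.array([4.0, 3.9, 3.8, 3.6, 13.0, 8.8])
covid = np.array([0, 0, 0, 0, 1, 1], dtype=float)

# Synthetic default rates: a stable unemployment effect plus a
# COVID-era shift (e.g. stimulus and forbearance suppressing defaults
# despite high unemployment) that unemployment alone cannot explain.
default_rate = 1.0 + 0.5 * unemployment - 4.0 * covid

# Ordinary least squares with the dummy included as a regressor.
X = np.column_stack([np.ones_like(unemployment), unemployment, covid])
beta, *_ = np.linalg.lstsq(X, default_rate, rcond=None)
intercept, unemp_coef, covid_coef = beta
print(intercept, unemp_coef, covid_coef)
```

The dummy coefficient soaks up the 2020 anomaly, leaving the unemployment coefficient free to describe normal times – which is exactly the "tacit admission" the dummy represents: the pandemic quarters are labeled, not explained.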
In terms of model-liquidity risk, let's be clear that I'm talking about the liquidity of bank models, not models of bank liquidity. This is the risk that, because the build-validate-approve-implement process for new models is so time-consuming, banks miss the opportunity to develop useful model-based research that has only a short shelf life.
With the official models now predicting wildly off the mark, banks are falling back on the standard protocol for such situations – liberal use of subjective management overlays. Personally, I'd prefer these overlays to be informed more by data science than by gut feeling – but that would require rapidly commissioned and implemented models with a very short gestation period.
Given the unusual and transitory nature of our current situation, moreover, it is likely that models built with 2020 data will only be useful during 2020.
The irony of all this is that while model governance processes are designed to control model risk, their application exposes institutions to no-model risk – the risk you take when you should have a model to inform a decision but are actively prevented from using one.
If the COVID-19 crisis drags on, rabbit-hole, model-liquidity and no-model risks may interact in damaging ways. Desperate to have some sort of model to use now, banks may be tempted to kick off the full build process, thus chasing the 2020 data down the rabbit hole. This would be fine only if the build process, in the midst of a crisis, were quicker and cheaper, and the resulting models more liquid and disposable.
Regulators may wince at the prospect of rapid crisis-era model deployment, but – compared to the no-model alternative – it looks to me like a far safer option.