Consider a new pandemic scenario: in 2023, a pathogen is discovered that, relative to COVID-19, is twice as deadly and five times as transmissible. It quickly becomes clear that vaccine development will be much more difficult than it was in 2020. Scientists estimate that 40% effectiveness is the best we can hope for, and that it will take at least three years to reach even that level.
Honestly, this isn’t worthy of deep thought.
That said, if asked to analyze a highly unlikely scenario like this, I suspect that most risk teams would take it in stride. While the parameters are well outside the domain defined by COVID-19, analysts at least have some data from which to extrapolate to the more extreme event.
If they’d been asked to run this scenario in 2019, as some have belatedly advised, there would have been very little data on which to base any analysis. Historical records of the 1918 pandemic offered some useful clues about social distancing and mask wearing, but little about the effect of pandemic lockdowns on the performance of modern financial institutions.
I well remember my own muddled thoughts in the early days of the pandemic – that North American and European governments would never implement strict Chinese-style lockdowns, that the stock market would tank and take years to recover, that such a huge disruption in daily commerce would lead to high default rates in corporate lending and mortgage portfolios. I was not alone in thinking these things, all of which turned out to be spectacularly wrong.
The simple fact is that in 2019, a global pandemic scenario was unprecedented in living memory; in 2021, this is no longer the case. From a risk management perspective, having direct experience of comparable events and an infusion of relevant data changes the game completely. With these advantages, it’s possible to calibrate and benchmark the projections. Without them, there’s no way to tell whether your forecast is better than mine or whether we’re both completely crazy.
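The point about calibration can be made concrete with a toy example. The sketch below (all figures are hypothetical illustrations, not real loss data) shows how, once realized outcomes exist, two competing projections can be scored against them with something as simple as mean absolute error:

```python
# A minimal sketch of benchmarking competing stress forecasts once
# realized data exist. All figures are hypothetical illustrations.

def mean_abs_error(forecast, realized):
    """Average absolute gap between projected and realized loss rates."""
    return sum(abs(f - r) for f, r in zip(forecast, realized)) / len(realized)

# Hypothetical quarterly credit-loss rates (%) projected by two analysts
# before the event, and the rates actually realized afterward.
analyst_a = [1.2, 2.5, 3.1, 2.0]
analyst_b = [0.8, 1.0, 1.1, 0.9]
realized  = [0.9, 1.1, 1.3, 1.0]

print(mean_abs_error(analyst_a, realized))  # 1.125 - far off the mark
print(mean_abs_error(analyst_b, realized))  # 0.125 - much closer
```

Before the event, with no comparable history, there is no `realized` series to score against, and the two forecasts are simply two opinions.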
COVID-19 Versus Subprime
It’s interesting to compare these events to lessons gleaned from the previous recession, when losses on subprime mortgages morphed into the global financial crisis of 2008/09. There were certainly elements of that crisis that were unusual – e.g., the rise of collateralized debt obligations (CDOs) and of the originate-to-securitize business model for mortgages; the worldwide nature of the preceding housing boom; and the fact that the boom and bust both took place during an era of weakened regulation and exceptionally low interest rates.
But other aspects of the event had echoes through history. Japan had experienced a monumental asset price boom in the late 1980s that bore many of the same hallmarks as the subsequent subprime crisis. Housing market booms and busts, moreover, have always been quite common at a local level – for example, Los Angeles in the late 1970s and Toronto in the late 1980s. We can even go back to the spectacular run-up in Chicago land prices that occurred during the heady days of the 1830s.
In simple terms, subprime was bigger than any mortgage crisis that came before, but it was not unprecedented. In the lead-up to the 2008 recession, it was possible to build reasonable models showing large increases in mortgage default; indeed, many analysts loudly rang warning bells about a housing bubble but were drowned out by the greater volume of the bull-run cheerleaders.
To run a sensible mortgage market stress test in 2006, all that was needed was to listen carefully to the naysayers (whether you agreed with them or not), while also considering the ramifications of the heterodox models showing more widespread house price declines and higher credit losses than the then-standard approaches.
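The arithmetic behind such a stress test need not be elaborate. As a hedged sketch – using the standard expected-loss decomposition (EL = PD × LGD × EAD) with entirely hypothetical parameter values, not calibrated estimates – one can see how a heterodox house-price assumption changes the answer:

```python
# A hedged sketch of back-of-the-envelope mortgage stress arithmetic:
# expected loss = PD x LGD x EAD, with loss-given-default rising as a
# house-price decline erodes collateral coverage.
# All parameter values below are hypothetical illustrations.

def stressed_expected_loss(ead, pd_stress, price_decline, ltv):
    """Expected loss on a mortgage book under a stressed house-price path.

    ead           -- exposure at default (portfolio balance)
    pd_stress     -- stressed probability of default
    price_decline -- fractional fall in house prices (0.25 = -25%)
    ltv           -- current loan-to-value ratio of the book
    """
    # Collateral value per unit of loan after the price shock.
    collateral = (1.0 / ltv) * (1.0 - price_decline)
    # Loss-given-default: the shortfall if stressed collateral no longer
    # covers the loan, floored at zero.
    lgd = max(0.0, 1.0 - collateral)
    return ead * pd_stress * lgd

# Then-standard scenario: a mild dip, collateral still covers the loan.
print(stressed_expected_loss(ead=100e9, pd_stress=0.02, price_decline=0.05, ltv=0.80))
# Heterodox scenario: a widespread 25% decline on a 90% LTV book.
print(stressed_expected_loss(ead=100e9, pd_stress=0.08, price_decline=0.25, ltv=0.90))
```

Under the benign assumptions the model reports essentially no loss; under the naysayers’ assumptions the same book shows losses in the billions. The data needed to argue about which parameters were plausible existed in 2006 – which is precisely what separates this case from a truly unprecedented event.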
Of course, it must be noted that just because a scenario has some sort of historical precedent, it does not necessarily follow that forecasts will always be accurate. History, after all, never repeats, but it is all we’ve got.
Pure Reason Versus Empirical Analysis
There are many scenarios that are unprecedented but likely to occur at some point in the future – including those that are potentially highly adverse to the financial sector. The most glaring is a large-scale cyberattack that triggers a Y2K-style failure of global computer networks. It is entirely reasonable to believe that such an event would add risk to the global financial system, but it is almost impossible to work out exactly how much.
If I were to suggest that such an event would cause only a small blip in bank financial performance, you might dismiss my projection as ridiculously optimistic. If I’d made a similar prognostication on the eve of the pandemic, it would most likely have been rejected along the same lines. In that case, of course, my seemingly ridiculous forecast would actually have turned out to be rather prescient.
And that’s the big problem with stress tests of unprecedented events: there’s simply no way to tell whose views are reasonable and whose are not. Our early pandemic forecasts amply demonstrate that it is exceptionally difficult, perhaps even impossible, to forecast economic outcomes or bank portfolio performance using pure reason alone. Empirical analysis, which is available when events have at least some historical precedent, provides a starkly superior stress testing solution.
So, is it even worth developing stress tests for unprecedented events? The stakes at play in something like a cyberattack are so great that accurate stress analysis will no doubt be craved by regulators and senior risk managers. The projections provided, however, will almost invariably be of little practical use as events unfold.
In other words, feel free to demand a stress test, just don’t expect it to be particularly useful.