
Crisis and Post-Crisis Risk Modeling: Weighing Machines vs. Humans

When it comes to both the present and the future, which financial institutions are in a better position: those that rely on traditional, structural credit risk models or those that lean toward machine-learning models? Accuracy, intuitiveness, adaptability and behavioral shifts are among the factors that should be evaluated when answering this question – but we must first consider how different types of models have performed in 2020.

Many commentators have described the difficulties that users of machine-learning models faced during the pandemic. The notion is that algorithmic methods, trained on pre-pandemic data, have been unable to cope with the ructions the crisis brought about.

Tony Hughes

It is therefore implied that traditional prediction methods, built around the human capacity for higher reasoning, are better suited to dealing with crises like COVID-19. It follows that while it may be safe for artificial intelligence to take charge of forecasting responsibilities during expansions, a human touch is needed to navigate the stormy seas of recession.

As a human being myself, I have some sympathy for this line of reasoning. In a world where AI is stealing jobs in many fields, it’s comforting to think that prediction robots may still need our help during periods of economic upheaval.

The problem, though, is that while COVID-19 has certainly been confusing for AI engines, the human brain has also struggled to cope with many of the changes wrought by the pandemic. If a forecasting competition had been scheduled for, say, April 2020 – the winner decided solely by out-of-sample mean squared error – are we really sure that Team Humanity would have prevailed?
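For readers unfamiliar with the scoring rule, out-of-sample mean squared error simply averages the squared gaps between forecasts and realized outcomes on data the forecaster never saw. A minimal sketch, with made-up default-rate figures purely for illustration:

```python
import numpy as np

def out_of_sample_mse(y_true, y_pred):
    """Mean squared error on a held-out sample: lower is better."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Hypothetical quarterly default rates and two competing forecasts
# (all numbers are invented for the example)
actual = [0.021, 0.034, 0.051]
human_forecast = [0.020, 0.025, 0.030]
machine_forecast = [0.022, 0.030, 0.045]

human_score = out_of_sample_mse(actual, human_forecast)
machine_score = out_of_sample_mse(actual, machine_forecast)
# Whichever score is smaller wins the competition
```

The rule is agnostic about how the forecasts were produced; it rewards only accuracy, which is precisely why the hypothetical contest is an uncomfortable one for either side to enter.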

Perhaps so, but who’s to say that our streak will continue through the next crisis? Technology and data availability are expanding so quickly, it seems likely that well-designed machines will learn more from the current crisis than people will. Remember that the robots don’t need to forecast future crises perfectly to render humans redundant; they just need to do a slightly better job than we can.

Evaluating the Modeling Playing Field

Accuracy, of course, isn’t the only criterion when assessing the utility of risk modeling systems.

Let’s compare the behavior of a machine-learning model with that of a structural model in the context of COVID-19. Assume that both were stable and performing well on the eve of the pandemic.

The structural model, we suppose, was correctly specified, intuitive to key stakeholders and properly estimated. We further presume that the black box algorithmic model had a slight edge in predictive accuracy before the crisis, and that it retained this advantage when the pandemic hit.

In this scenario, a primary advantage that the structural model held prior to the crisis has been nullified. The COVID-19 new normal has triggered both a change in the specification of the best model and a shift in its underlying parameters, meaning that the intuitiveness of the pre-crisis structural model has been lost, at least temporarily.

The machine-learning model, by contrast, was never built with intuitiveness in mind, so nothing has been lost. In relative terms, therefore, the utility of the opaque AI methodology has increased as a direct result of the confusion caused by the crisis.

A Modest Nod to AI

The next advantage held by machine-learning models pertains to the looming post-COVID-19 transition, which has been accelerated by recent positive news on the vaccine front.

When the dust settles, structural modelers will need to assess whether our experience this year should be represented as a structural break, where the behavior of subjects is assumed to have changed permanently, or a regime switch, which is a temporary change followed by a reversion. A partial reversion, combining elements of both features, is another likely modeling complication.

Specifying these features correctly will be a major challenge for structural modelers. They will need enough post-pandemic data to identify any model specification or parameter shifts, as well as a watertight intuition to explain what has changed since the good ol’ days. If the correct mechanism turns out to be a regime switch, the task will be easier, because pre-coronavirus data will provide a stronger signal for post-pandemic prediction.
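The distinction can be made concrete with simulated data. In the toy sketch below (the two-percent and four-percent default-rate levels, and all other numbers, are invented), a series either settles at a new permanent level after a shock – a structural break – or reverts to its old level – a regime switch. Comparing pre- and post-shock means is one crude way to tell the two apart:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly default rates (%): a stable pre-crisis level,
# a six-month crisis spike, then one of two possible aftermaths
pre = rng.normal(2.0, 0.1, 36)          # pre-crisis level around 2%
shock = rng.normal(4.0, 0.1, 6)         # crisis spike around 4%
post_break = rng.normal(3.0, 0.1, 24)   # structural break: new permanent level
post_switch = rng.normal(2.0, 0.1, 24)  # regime switch: reversion to old level

def level_shift(pre_series, post_series):
    """Post-crisis mean minus pre-crisis mean; near zero suggests reversion."""
    return float(np.mean(post_series) - np.mean(pre_series))

break_shift = level_shift(pre, post_break)    # clearly nonzero
switch_shift = level_shift(pre, post_switch)  # near zero
```

Real data are, of course, far messier: the shock window is not cleanly delimited, and a partial reversion would leave a shift that is neither clearly zero nor clearly large, which is exactly why the specification problem is hard.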

Modern, dynamic machine-learning models are specifically designed to cope with these types of behavioral shifts. While we use structural models to try to understand the changes occurring in the economy, machine-learning techniques seek only to identify that they have occurred and to measure the impact of changes on subsequent predictions. The algorithmic methods will need a fair bit of post-crisis data to calibrate correctly, but the process itself should proceed quite naturally.
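One simple mechanism behind this adaptability is limited memory: a model estimated only on recent observations lets pre-shift data fall out of the sample automatically, with no need to name or explain the break. A deliberately crude sketch (rolling-mean forecasts on a made-up series with a level shift):

```python
import numpy as np

def rolling_mean_forecast(series, window):
    """One-step-ahead forecast: the mean of the most recent `window` points."""
    series = np.asarray(series, dtype=float)
    return float(series[-window:].mean())

def expanding_mean_forecast(series):
    """One-step-ahead forecast using the full history, old data included."""
    return float(np.mean(series))

# Toy default-rate history (%): the level shifts from 2.0 to 3.0 mid-sample
history = [2.0] * 40 + [3.0] * 12

short_memory = rolling_mean_forecast(history, window=12)  # adapts to 3.0
long_memory = expanding_mean_forecast(history)            # dragged down by old data
```

The rolling forecast lands on the new level as soon as the window clears the break, while the expanding forecast lags for as long as pre-shift observations dominate the sample. This illustrates the trade-off in the text: the adaptive method needs a fair bit of post-crisis data before its window is clean, but the adjustment itself requires no structural insight.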

Some problems associated with machine learning, like the challenges in ensuring fair outcomes for consumers, were unaffected by the COVID-19 experience. In terms of the features that shifted as a result of the pandemic, though, I suspect that the playing field has actually tilted slightly in favor of machine learning.

This conclusion is predicated on a continuing accuracy advantage for algorithmic modeling systems.
