Can disruptive technologies ever truly and completely replace human judgment in finance?
We can begin our search for answers by considering automation progress in other industries. In the world of self-driving cars, for example, it is common to speak of six levels of driving automation, numbered 0 through 5.
Level 3, which some manufacturers can currently claim, allows the car to operate autonomously – but the driver must stay alert, ready to assume control at any moment. At Level 4, the driver can take a nap in fair weather if so inclined, but not if it’s snowing, for example, or if the roads are otherwise in poor condition. Level 5, the ultimate aim of researchers, removes any need for a human to control the vehicle under any circumstances. (I like to joke that at Level 6 the car also determines the destination.)
Thinking about the increasing presence of machine learning as part of the credit assessment process, an interesting question is where we currently stand on this scale. Can we define Level 5 automation as it pertains to banking? Will this level of technology ever be attained by our artificial intelligence and machine learning researchers?
In consumer credit, I would argue that Level 3 automation has been with us for several decades. Especially after the introduction of the FICO score in the late 1980s, lenders adapted scoring technology to enable instant credit decisions in many cases.
A card issuer, for example, may automatically approve an applicant scoring 820 and instantly reject one scoring 590. A “driver,” though, will “grab the steering wheel” if an applicant presents with a middling 680, prompting a more rigorous, mammalian evaluation.
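The dual-control setup described above amounts to a simple routing rule. A minimal sketch – the thresholds and function name are hypothetical, not any issuer’s actual policy:

```python
def route_application(score: int) -> str:
    """Route a credit application by FICO-style score.

    Cutoffs are illustrative only: scores at or above the approve
    cutoff are decided instantly, scores at or below the decline
    cutoff are rejected instantly, and the middle band goes to a
    human analyst -- the "driver grabbing the steering wheel."
    """
    APPROVE_AT = 820  # hypothetical auto-approve cutoff
    DECLINE_AT = 590  # hypothetical auto-decline cutoff

    if score >= APPROVE_AT:
        return "auto-approve"
    if score <= DECLINE_AT:
        return "auto-decline"
    return "manual-review"
```

Squeezing the human out of the system, in this sketch, simply means narrowing the manual-review band until it disappears.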
If this card issuer can fine-tune the scoring process using machine learning, and thus squeeze the human out of this system, I would argue they will have attained Level 4 automation. Several fintechs are likely already close to achieving this.
To eliminate humans from the process completely, they would need to show, via a cost-benefit analysis, that the virtual loan screener – the score – consistently generates higher profits or lower credit losses than its human counterparts. The learned machine may make mistakes, but it wins the contest if it makes fewer bad decisions than the system employing the humble Homo sapiens.
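The contest between machine and human screeners reduces to comparing expected profit under each process. A minimal sketch, with entirely hypothetical figures and a made-up function name:

```python
def screener_profit(n_approved: int, margin_per_loan: float,
                    default_rate: float, loss_per_default: float,
                    operating_cost: float) -> float:
    """Expected profit of a loan-screening process (all inputs hypothetical)."""
    revenue = n_approved * margin_per_loan
    credit_losses = n_approved * default_rate * loss_per_default
    return revenue - credit_losses - operating_cost

# Illustrative scenario: the machine approves more loans at a slightly
# higher default rate, but at a fraction of the staffing cost.
machine = screener_profit(10_000, 300.0, 0.04, 2_500.0, 150_000.0)
human = screener_profit(9_000, 300.0, 0.035, 2_500.0, 900_000.0)
```

In this invented scenario the machine wins the contest despite making more bad decisions per loan, because the cost savings dominate – which is exactly why the comparison must be run on profit and loss, not error rates alone.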
It’s doubtful that this can become the norm in finance, as regulators may be loath to sign off on any process that involves no human intervention whatsoever. But why would this not constitute attainment of Level 5? What is the analog of a snowy road in the world of finance?
Humans vs. AI: The Recessions Factor
There are two main concerns we can identify: recessions and abrupt structural breaks.
A machine learning algorithm, after all, has memory only of the data on which it has been trained. If that database covers an expanding economy, with no recessions, the system may provide wonderful results in fair weather but fail when the economic worm inevitably turns.
People behave differently in downturns; they will often deleverage and will tend to take fewer risks generally. Any number of bad things can happen to people during these events, the likes of which are often vanishingly rare during expansions.
If the data extend back to, say, 2005, you may think you’re in the clear – the machine can learn from both the runup to and the consequences of the last, rather extreme, recession. The problem is that all recessions are different. The next one may resemble a historical example, or it might not. I’m unaware of any circumstance where consecutive recessions have had the same cause or effect.
Personally, I can recall only four recessions that have occurred in my lifetime, with three coming during my professional career. I was too young to have experienced the vagaries of the early 1980s recession, but I vividly remember that my father lost his job.
I have since learned that this was a high-inflation, supply-side recession that was very different from anything we endured in 2008. Having detailed knowledge of the late 2000s recession won’t necessarily prepare the robot to deal with something resembling a rerun of 1981.
Would you ride in a driverless car, in threatening weather, trained on just a single snowstorm?
Recessions are just too rare and idiosyncratic to enable artificial intelligences to cope with them in any general sense. Seasoned human beings have an edge in experience here; for this reason, I’m doubtful that claims of Level 5 automation in finance will ever be completely trusted.
The other problem is abrupt structural breaks – changes in the nature of observed relationships in the data – which may be caused, for example, by government legislation or sudden shifts in consumer tastes.
Self-driving technology, in one sense, is easier to program because cars invariably keep to the same side of the street and a red traffic signal always means stop. In lending data, relationships you identify today may not apply next year – the “rules” of human behavior and engagement are constantly changing.
Machine learning researchers in both the automotive and financial spaces are confident they can achieve a very high level of autonomy. Indeed, Level 4 seems well within reach in both fields. Nonetheless, those trusting their lives to autonomous vehicles, as well as those funding automated lending operations, should retain skepticism of new technology.
I’m not necessarily suggesting that human analysts will always be better than the machine at anticipating rule changes, but that it would be helpful to have two distinct sets of eyes – one literal, the other figurative – on the road during uncertain times. A well-built machine learning algorithm should be able to operate autonomously during more mundane times, but uncertain periods require a more experienced hand at the wheel.