The End of Insurance

Plus: More China "Fintech", Scale in Asset Management, Billion Dollar Whale

Hello and welcome to another issue of Net Interest, my newsletter on financial sector themes. If you’re new here, thanks for signing up. Every Friday I go deep on a topic of interest in the sector and highlight a few other trending themes below. If you have any feedback, reply to the email or add to the comments. And if you like what you’re reading, please do share and invite friends and colleagues to sign up.

The End of Insurance

Daniel Bernoulli wasn’t even the best mathematician in his own family. But his contribution to the study of risk is profound. 

In his 1738 essay, Exposition of a New Theory on the Measurement of Risk, Bernoulli applied his mind to the question of insurance. The basics of probability theory had already been unearthed, but Bernoulli was puzzled by how insurance fit into its framework.

His thinking goes like this: to be viable, an insurer would need to charge a premium at least equal to the expected value of any claims that may be made against it; to lock in a profit margin, the insurer would have to charge more. Yet why would a buyer want to pay more than the expected value of their losses? An insurance contract is zero-sum—one party’s loss is the other party’s gain. Which means that in a world of economically rational people, there isn’t a price at which insurance makes sense. 

Clearly, the empirical evidence shows that insurance does make sense. Today, buyers spend $6.3 trillion a year on insurance premiums.

Bernoulli proposed a fix that became a cornerstone of economic thinking for the next two and a half centuries. He suggested that rather than consider profit and loss in absolute terms, the parties consider the utility of their profit and loss. Because their utility functions don’t have to be the same, a price emerges at which both parties are comfortable entering an insurance contract. Bernoulli suggested that such utility was a function of how much wealth each of the parties had in reserve. The greater the wealth of the insurance buyer, the lower their propensity to buy insurance. Meanwhile, the greater the wealth of the insurer, the greater their propensity to sell insurance: “A man less wealthy than this would be foolish to provide the surety, but it makes sense for a wealthier man to do so.”
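
To see the fix at work, here is a toy calculation. The numbers, the log-utility assumption and the wealth levels are mine, not Bernoulli’s: a 1% chance of a $20,000 loss, a buyer with $100,000 in reserve, an insurer with $10 million.

```python
import math

# Illustrative numbers, not Bernoulli's: a 1% chance of a $20,000 loss
P_LOSS, LOSS = 0.01, 20_000
EXPECTED_LOSS = P_LOSS * LOSS  # $200: the "fair" premium in expected-value terms

def max_premium_buyer(wealth):
    """Highest premium a log-utility buyer will pay to shed the risk."""
    eu_uninsured = (1 - P_LOSS) * math.log(wealth) + P_LOSS * math.log(wealth - LOSS)
    # Solve log(wealth - premium) = eu_uninsured for the premium
    return wealth - math.exp(eu_uninsured)

def min_premium_insurer(wealth):
    """Lowest premium a log-utility insurer will accept to take on the risk,
    found by bisection on expected utility versus the status quo."""
    lo, hi = 0.0, LOSS
    for _ in range(60):
        mid = (lo + hi) / 2
        eu = ((1 - P_LOSS) * math.log(wealth + mid)
              + P_LOSS * math.log(wealth + mid - LOSS))
        lo, hi = (lo, mid) if eu > math.log(wealth) else (mid, hi)
    return hi

print(f"Expected loss:                 ${EXPECTED_LOSS:,.0f}")
print(f"Buyer with $100k pays up to    ${max_premium_buyer(100_000):,.2f}")
print(f"Insurer with $10m accepts from ${min_premium_insurer(10_000_000):,.2f}")
```

On these assumptions the buyer will pay up to about $223 and the insurer will accept from about $200, so a window of mutually agreeable prices exists above the $200 expected loss. And it exists only because the insurer is so much wealthier.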

It wasn’t a perfect theory, and in 1979 it was famously revised by Daniel Kahneman and Amos Tversky. [1] But it captures why insurance companies are big.

Insurance is predicated on two principles:

  • Law of large numbers. If you repeat a random experiment often enough, the average of the outcomes converges towards the expected value. A larger book of business therefore gives an insurer greater clarity on its losses than a smaller one. The more similar risks an insurer can bring together, the higher its confidence in predicting overall claims. 

  • Risk pooling. For typical insurable risks, the frequency of claims is low and an insurer can spread losses suffered by a few policyholders across a large group of similar policies. A hundred people of similar risk profile each paying a premium of $200 a year can cover the risk of any one of them losing $20,000.

These twin concepts give an insurance company the grounding to offer policyholders risk transfer at an acceptable premium. They allow the transformation of individual uncertainty about the future into measurable aggregate risk. 
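
A short simulation shows both principles at work, reusing the numbers from the pooling example above and assuming (hypothetically) a 1% annual chance of loss per policyholder:

```python
import random

random.seed(42)
P_LOSS, LOSS, PREMIUM = 0.01, 20_000, 200  # premium set equal to the expected loss

def average_claim(n_policies):
    """Average claim cost per policy across a book of n similar, independent risks."""
    total = sum(LOSS for _ in range(n_policies) if random.random() < P_LOSS)
    return total / n_policies

# Law of large numbers: a bigger book converges on the $200 expected value
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} policies -> average claim ${average_claim(n):,.2f} per policy")

# Risk pooling: 100 premiums of $200 raise $20,000, enough to cover one total loss
print(f"A pool of 100 collects ${100 * PREMIUM:,}, covering one ${LOSS:,} loss")
```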

Except now we have data. Lots and lots of data. It can be captured at a much more granular, more personalised level of detail. This means that rather than relying on a collective approach to risk, insurers can underwrite to specific, individualised risk.

Enter Root

Next week Root goes public. We discussed the company in More Net Interest a couple of weeks ago. It’s an insurance company focussed on the US auto market. Its view on the twin principles of insurance goes like this:

“For centuries, traditional insurance companies have grouped people into risk pools and long relied on the ‘law of large numbers’ to produce acceptable pricing on an aggregate basis. Fairness at the individual level has been largely ignored. Root is different—we use technology to measure risk based on individual performance, prioritizing fairness to the customer.” 

The company relies on data to predict the probability of a customer suffering a loss. It gathers this data from customers’ mobile phones, picking up braking and turning speeds, miles driven, phone usage while driving and more. Its app picks up over 200 factors in total, which supplement traditional factors like age, gender and ZIP code.

In the traditional model, risk pools are formed along a set of rate factors, with each pool corresponding to a certain combination of rate factor categories (or intervals where the factors are continuous). Within each pool, actuaries analyse historic claims data to arrive at an estimate of the minimum premium per policy required to cover expected losses. To overcome adverse selection – where policyholders take advantage of an insurance company that has failed to price risk correctly – a no-claims bonus system is incorporated to reduce the premium for policyholders with a good claims history.
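
Here is a stylised sketch of that pipeline (rate factors, pools, pure premiums and a no-claims bonus), with invented data throughout:

```python
from collections import defaultdict

# Hypothetical policy history: (age band, territory, claims incurred that year)
history = [
    ("18-25", "urban", 3_400), ("18-25", "urban", 0),   ("18-25", "urban", 0),
    ("26-65", "rural", 0),     ("26-65", "rural", 900), ("26-65", "rural", 0),
]

# 1. Form risk pools along combinations of rate factor categories
pools = defaultdict(list)
for age_band, territory, claims in history:
    pools[(age_band, territory)].append(claims)

# 2. Within each pool, the pure premium is the average of historic claims
# 3. A no-claims bonus trims the premium for a clean history, countering
#    adverse selection against mispriced pools
NO_CLAIMS_DISCOUNT = 0.20

for pool, claims in pools.items():
    pure_premium = sum(claims) / len(claims)
    print(f"{pool}: pure premium ${pure_premium:,.0f}, "
          f"${pure_premium * (1 - NO_CLAIMS_DISCOUNT):,.0f} after no-claims bonus")
```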

Root characterises this process as one of correlation and not causality, and it “strive[s] to price based more on causality than correlation”. So it mines driving performance data for signals of cause. It claims to have data from over 10 billion miles of driving and hundreds of thousands of claims. Although telematics technology has been around for many years, the company reckons that only now, through mobile phone deployment, has it become scalable. Its predictive analytics are a work in progress, but it reckons that the worst 10-15% of drivers it screens are twice as likely to get into an accident as its average targeted customer. 
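
Root doesn’t disclose its model, but the shape of behavioural pricing can be caricatured in a few lines: a logistic score over a handful of telematics features. The features and weights below are invented for illustration; Root’s actual 200-plus factors are not public.

```python
import math

def accident_probability(hard_brakes_per_100mi, phone_use_share, night_miles_share):
    """Toy logistic model mapping telematics features to an annual accident
    probability. Features and weights are made up for illustration only."""
    z = (-3.0                               # baseline log-odds
         + 0.15 * hard_brakes_per_100mi    # harsh braking events per 100 miles
         + 2.0 * phone_use_share           # share of driving time on the phone
         + 1.0 * night_miles_share)        # share of miles driven at night
    return 1 / (1 + math.exp(-z))

smooth = accident_probability(2, 0.02, 0.05)
risky = accident_probability(6, 0.15, 0.15)
print(f"Smooth driver: {smooth:.1%}, risky driver: {risky:.1%} ({risky / smooth:.1f}x)")
```

With these made-up weights the risky driver comes out roughly twice as accident-prone as the smooth one, the same order of magnitude as Root’s screening claim.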

Various third party vendors sell similar data:

  • Octo Telematics claims to have collected 248 billion miles of driving data and analysed over 464,000 insurance events;

  • Cambridge Mobile Telematics partners with State Farm, Liberty Mutual, Nationwide and other leading insurers, and has SoftBank backing;

  • Greater Than is a listed Swedish technology company which has analysed 750 million driving profiles. It claims its AI “predicts accidents before they happen”. 

Taken to its extreme, individual risk profiling undermines the very need for insurance. If accidents can be predicted before they happen, they become uninsurable—like an insurance underwriter’s Minority Report. More realistically, predictive analytics could lead to materially higher rates for riskier consumers, making insurance unaffordable for them. Insurance was built upon the recognition of the irreducible opacity of individuals; behavioural data offers to lift that opacity.

For various reasons, we’re not yet at that extreme. Most insurance companies use behavioural data as a complement to their traditional methods of pricing, rewarding customers with a discount on the traditional tariff. Even Root, despite its disruptive credentials, isn’t there yet:

“Over time we hope that we can replace all correlation-related inputs to our pricing model, such as credit scores, with a fully behavioral pricing model.”

The company recently announced an initiative to remove credit scoring from its underwriting criteria by 2025. In the meantime, its pricing still relies heavily on it, together with other traditional factors like age, gender and ZIP code. An analysis of Root’s regulatory rate filings shows that its rates have historically tracked Progressive’s. Its premium per policy isn’t that different either, so it doesn’t look like it’s picking up analytically lower-risk customers by offering them a fairer price. 

Nor are the benefits of predictive analytics visible in its loss trends. The company reports losses of around 100 cents on every dollar of premium earned. That compares with peers who lose around 70 cents per dollar. Root claims that long-standing customers have better loss ratios. But right now it is struggling to retain customers: its one-year retention rate is around 38%, which compares with 60-85% among peers.

There’s an irony in a company presenting itself as a technology company – where value typically accrues from high retention and correspondingly high customer lifetime value – having retention rates (and, by implication, customer lifetime values) lower than those of the incumbent industry from which it wants to distance itself. [2]
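
The mechanics of that implication are straightforward: under a constant annual retention rate, lifetime value is a geometric series. A minimal sketch, with an invented $100 annual margin and a 10% discount rate:

```python
def lifetime_value(annual_margin, retention, discount_rate=0.10):
    """PV of a customer under constant annual retention:
    LTV = margin * sum over t of (retention / (1 + discount))^t
        = margin * (1 + discount) / (1 + discount - retention)"""
    return annual_margin * (1 + discount_rate) / (1 + discount_rate - retention)

# Same hypothetical $100 annual margin; only the retention rate differs
print(f"38% retention (Root):  ${lifetime_value(100, 0.38):,.0f}")
print(f"75% retention (peers): ${lifetime_value(100, 0.75):,.0f}")
```

On those assumptions a customer retained at 38% is worth roughly half one retained at 75%, before counting acquisition costs.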

There are a number of reasons why, despite the best intentions of Root, we may be years away from insurance existentially disrupting itself.

The first is the same constraint that many technologies find themselves under—battery life. As mobile phones suck behavioural data out of their users and upload it to Root HQ, their batteries drain. This limits the volume of data that can be pulled without compromising customer satisfaction. Increasingly, connected vehicles are hitting the roads with a built-in capacity to upload data directly, yet they lack the ability to assess phone usage while driving.  

Second, there are regulatory implications. Policymakers are concerned about data privacy (a risk factor cited by Root) and also about discrimination. Discrimination is an issue under current standards, too: European legislation prohibits the use of gender as a rating factor, but if algorithms reverse-engineer gender from other inputs, that could be seen as problematic. 

Finally, driver behaviour may not be as consistent as fixed variables like age and address. Root underwrites policyholders after observing their driving patterns for two to four weeks on a test drive. Yet six months later, it doesn’t offer renewal policies to a third of policyholders, indicating a change in behaviour between those first few weeks and the six months that follow. Those customers who do pass the underwriting test may be put off if their rate increases without their having made a claim. 

There are many advantages to personalised pricing. It’s fairer, as Root highlights. In addition, instant feedback on risk scoring can influence behaviour positively, reducing the scope for accidents. The transaction becomes one of prevention rather than insurance. 

However, there’s also a darker side. In her book Weapons of Math Destruction, Cathy O’Neil cautions that big data can increase inequality across societies. On personalised pricing in insurance, she writes:

...surveillance will change the very nature of insurance. Insurance is an industry, traditionally, that draws on the majority of the community to respond to the needs of an unfortunate minority. In the villages we lived in centuries ago, families, religious groups, and neighbours helped look after each other when fire, accident, or illness struck. In the market economy, we outsource this care to insurance companies… 

As insurance companies learn more about us, they’ll be able to pinpoint those who appear the riskiest customers and then either drive their rates to the stratosphere or, where legal, deny them coverage. This is a far cry from insurance’s original purpose, which is to help society balance its risk. In a targeted world, we no longer pay the average. Instead, we’re saddled with anticipated costs. Instead of smoothing life’s bumps, insurance companies will demand payment for those bumps in advance. This undermines the point of insurance, and the hits will fall especially hard on those who can least afford them.

Lemonade, in its listing prospectus, provides a potted overview of the history of insurance. It talks about the emergence of insurance dynasties that have reigned since the time of Bernoulli, and concludes: “A new revolution now threatens these hegemons.”

Beware the revolution.

[1] Ole Peters and Alexander Adamou present an alternative theory derived from the concept of ergodicity. They argue that time needs to be incorporated into the model: rather than thinking in terms of expected values, which reflect an average over parallel universes, parties to the contract think in terms of the time-average growth rates of their wealth. This leads to a different calculus, in which the contract is not zero-sum. 
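
A compressed version of their argument, in notation of my own: a buyer with wealth $W_b$ faces a loss $C$ with probability $p$ per period $\delta t$ and can pay a premium $F$ to insure against it. The quantities to compare are time-average growth rates rather than expected values:

$$g_{\text{buyer, insured}} = \frac{1}{\delta t}\ln\frac{W_b - F}{W_b}, \qquad g_{\text{buyer, uninsured}} = \frac{p}{\delta t}\ln\frac{W_b - C}{W_b}$$

$$g_{\text{insurer}} = \frac{1}{\delta t}\left[(1-p)\ln\frac{W_s + F}{W_s} + p\ln\frac{W_s + F - C}{W_s}\right]$$

Because the logarithm is concave, when the insurer’s wealth $W_s$ is large relative to $C$ and the buyer’s is not, there is a range of premiums $F$ above the expected loss $pC$ at which both growth rates improve. The contract is positive-sum in time, recovering Bernoulli’s observation about the wealthier man without invoking utility at all.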

[2] Embedded value accounting emerged in the insurance industry in the 1990s and is in some ways the precursor to the customer lifetime value analytics popular more broadly today. The key input to embedded value is the present value of in-force business. The analysis recognises that in a typical cashflow profile, losses are incurred in the first year of a contract and recovered by profits in later years. Insurance startups like Bought by Many and Next Insurance target LTV/CAC ratios of 3.0x, which is a new way of reflecting an old insurance idea. 

More Net Interest

More China "Fintech"

Another week, another fintech IPO. This one’s Lufax, a platform lender that offers large-ish loans to owners of small/medium-sized businesses in China. The hosts of Tech Buzz China do a good job laying out the history and business model of the company. Lufax is a survivor of the peer-to-peer lending wave that was one of the quickest boom-bust cycles in financial history. In 2015 there were over 3,500 peer-to-peer lending companies operating in China. The following year the government clamped down and they began to fold, abscond or exit. Today, there are only around three dozen remaining. One of them is Lufax.

Not that it does much peer-to-peer lending anymore. In 2016 Ping An injected a new business, Puhui, into Lufax, and that is the core of the business today. Puhui was founded in 2005, and it’s not that techy. Although the company argues that “we apply advanced technology, including big data, AI and blockchain technology”, it employs 85,000 people, over three-quarters of whom work in sales and marketing (including 4,000 in telemarketing). Consequently, it has few of the economics of a classic fintech business.

Scale in Asset Management

We’ve talked before in More Net Interest about the role of scale in the asset management industry. BlackRock is now a $7.8 trillion business. One consequence is that it makes less sense for banks to hold on to their subscale asset management arms. BlackRock itself grew partly by picking up asset management businesses being offloaded by banks like Merrill Lynch and Barclays. Since then, many European banks have sold, including Deutsche Bank. Now Wells Fargo is said to be selling its asset management business. The firm manages $607 billion, so it’s a fair bit smaller than BlackRock, and it has been seeing outflows for the past few years. It may get sold to a private equity firm, but if it goes to an incumbent, it’s another step in the march to scale.

Billion Dollar Whale

One of the most entertaining books I’ve read in the past two years is Billion Dollar Whale, by Tom Wright and Bradley Hope. It’s the definitive account of the 1MDB scandal, about which a new chapter was written this week. Goldman Sachs, heavily featured in the book, reached a settlement with the US authorities over its dealings with 1MDB that will cost it $2.9 billion, in addition to $2.5 billion it has already paid to the Malaysian authorities. Total fines amount to 8x the fees Goldman earned from doing business with 1MDB. 

The Goldman Sachs board’s press release rattles off a list of culpable parties. Three are name-checked for their criminal role in the saga, and various former senior executives are acknowledged to have had a hand in institutional failures; they will be asked to give back parts of their bonuses from prior years. Nobody comes out looking good.

Except perhaps for one man—Jordan Belfort, the Wolf of Wall Street. 

Himself no stranger to fraud, Jordan Belfort thought something wasn’t right about this setup...

“This is a fucking scam—anybody who does this has stolen money,” Belfort told Anne [his girlfriend], as the music thumped. “You wouldn’t spend money you worked for like that.”