When Life Insurance Gives You AI, Should You Make Lemonade?

Insurance companies increasingly use artificial intelligence in their operations. Advertisements for these services often mention how customers can sign up for policies faster, file claims more efficiently, and get 24/7 assistance, all thanks to AI.

However, a recent Twitter thread from Lemonade, an insurance brand built around AI, sheds light on the practice’s potential issues. The public reaction made it clear that the technology can help or hurt, depending on how a company applies it.

Twitter Transparency Raises Alarm

Many companies don’t divulge details about how they use AI. The idea is that keeping the AI shrouded in mystery gives the impression of a futuristic offering while protecting a company’s proprietary technology.

When Lemonade took to Twitter to give people insight into how its AI works, the thread opened by explaining how the company uses information. For example, one tweet confirmed that Lemonade collects approximately 100 times more data than traditional insurance companies.

The thread continued by explaining that the company’s AI chatbot asks customers 13 questions and, while doing so, gathers more than 1,600 data points, compared with the 20-40 that other insurers typically collect. The company uses this information to gauge a customer’s associated risk, which helps Lemonade lower its operating costs and loss ratio.
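
Lemonade hasn’t published how a 13-question conversation becomes 1,600-plus data points, but a plausible explanation is that each answer gets stored alongside derived and contextual signals. The sketch below is a hypothetical illustration of that idea in Python; every field and function name is invented, not taken from Lemonade’s system.

```python
# Hypothetical illustration only: how a short chatbot exchange could yield
# many more stored data points than the answers themselves.

from datetime import datetime, timezone

def collect_data_points(answers: dict, session_meta: dict) -> dict:
    """Combine explicit answers with derived signals into one feature record."""
    record = {}

    # The explicit answers are only a small fraction of the record.
    for question_id, answer in answers.items():
        record[f"answer_{question_id}"] = answer
        record[f"answer_{question_id}_length"] = len(str(answer))

    # Contextual signals (device, timing, and so on) multiply the count
    # of data points well beyond the number of questions asked.
    record["device_type"] = session_meta.get("device_type")
    record["avg_seconds_per_answer"] = session_meta.get("avg_seconds_per_answer")
    record["collected_at"] = datetime.now(timezone.utc).isoformat()
    return record

# Two answers already become seven stored data points.
example = collect_data_points(
    answers={"q1_age": 34, "q2_smoker": "no"},
    session_meta={"device_type": "mobile", "avg_seconds_per_answer": 4.2},
)
print(len(example), "data points")
```

Scale that pattern across richer signals and a longer conversation, and four-digit data point counts per customer stop sounding implausible.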

The fourth tweet of the seven-message string entered even more eyebrow-raising territory, suggesting that the Lemonade AI analytics detect nonverbal cues associated with fraudulent claims. The company’s process involves customers using their phones to shoot videos explaining what happened.

Twitter users questioned the ethics of that approach, pointing out the problems with unaccountable computers making decisions about life-altering claims, such as those for burned-down houses. One called the practice “an even more overtly pseudoscientific version of a traditional lie detector test.”

AI Makes Mistakes, Too

AI-driven fraud detection extends beyond insurance. Many banks, for example, use it to flag unusual charges. However, the technology can misread situations, and it does. Even the most skilled programmers cannot build flawless systems.
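
To see how easily that kind of flagging misfires, consider a deliberately simplified sketch of pattern-based charge screening, not any particular bank’s system: compare each new charge against a customer’s usual spending and flag anything far outside that range.

```python
# Simplified, hypothetical fraud-flagging sketch: flag any charge that sits
# far outside a customer's usual spending pattern. Real systems use far
# richer features, but the false-positive risk is the same.

from statistics import mean, stdev

def is_suspicious(history: list[float], new_charge: float, threshold: float = 3.0) -> bool:
    """Flag a charge more than `threshold` standard deviations above the mean."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_charge > mu * threshold
    return (new_charge - mu) / sigma > threshold

history = [12.50, 8.99, 45.00, 22.30, 15.75, 9.40]
print(is_suspicious(history, 19.99))    # False: an ordinary purchase
print(is_suspicious(history, 1200.00))  # True: flagged, even if perfectly legitimate
```

A once-a-year appliance purchase trips the same rule as a stolen card would, which is exactly the kind of misfire described below.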

Most people have occasionally faced the embarrassing moment of trying to buy something and hearing the cashier say the transaction failed, even though they had plenty of money in their account. Fixing it is usually as simple as contacting the card issuer to explain what happened and approve the charge.

However, the stakes are arguably much higher when the decision concerns a claim for someone’s essential property. What if the AI gets it wrong and categorizes a policyholder’s legitimate catastrophe as fraudulent? Someone who faithfully pays their premiums, expecting the coverage to provide peace of mind after a disaster, could find themselves unprotected after all, simply because of a human blunder during programming.

Lemonade lets customers cancel at any time and receive refunds for any remaining paid period on a policy. After reading the controversial Twitter thread, many people publicly indicated they wanted to switch providers. It’s too early to tell how many will follow through.

Profiting at Customers’ Expense?

Another part of Lemonade’s tweet string mentioned how the company’s loss ratio was 368% in the first quarter of 2017 but had fallen to 71% by the first quarter of 2021. The insurer is not alone in ramping up its AI investment to help profits.

The steps company leaders take when implementing AI affect the results. One BDO study showed an average of 16% revenue growth for companies that increased IT investment during AI implementation, compared with an average increase of just 5% for those that did not devote more resources to IT.

No matter the specific steps a company takes with artificial intelligence, Lemonade’s fiasco sparked understandable public worry. One of AI’s main downsides is that algorithms often cannot explain which factors led them to a conclusion.

Even the tech professionals who build them often cannot pinpoint which aspects caused an AI tool to make one decision over another. That’s a worrying reality for insurance AI products and for every other industry that uses artificial intelligence to reach critical decisions. Some AI analysts, writing in HDSR, understandably advocate against the unnecessary use of black-box models.
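
The HDSR argument boils down to this: for high-stakes decisions, prefer models whose reasoning can be read off directly over black boxes that only output a verdict. Here is a minimal, hypothetical sketch of what that readability looks like; the features and weights are invented for illustration, not drawn from any insurer’s model.

```python
# Minimal sketch of an interpretable scoring model: every factor's
# contribution to the decision can be listed, unlike a black box that
# only returns a label. Features and weights are invented.

def explain_decision(features: dict, weights: dict, cutoff: float = 1.0):
    contributions = {name: weights.get(name, 0.0) * value for name, value in features.items()}
    score = sum(contributions.values())
    decision = "flag for review" if score > cutoff else "approve"
    return decision, sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

weights = {"prior_claims": 0.8, "days_since_policy_start": -0.002, "claim_vs_premium_ratio": 0.5}
features = {"prior_claims": 2, "days_since_policy_start": 40, "claim_vs_premium_ratio": 1.5}

decision, contributions = explain_decision(features, weights)
print(decision)                              # flag for review
for name, contribution in contributions:
    print(f"{name}: {contribution:+.2f}")    # exactly which factors drove the outcome
```

A customer denied by a model like this can at least be told which factors drove the outcome; a black-box denial offers no such answer.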

Lemonade’s website mentions how AI makes a significant percentage of its claims decisions in seconds. That’s good news if the outcome works in a customer’s favor. However, you can imagine the extra pressure put on an already-stressed insured person if the AI takes less than a minute to deny a valid claim. Lemonade and other AI-driven insurers may not mind if that system helps them profit, but customers will if the company’s technology gives unfair judgments.

Lemonade Backpedals

Lemonade’s representatives quickly deleted the controversial tweet string and replaced it with an apology. The message said that Lemonade AI never automatically denies claims and does not evaluate them based on characteristics such as a person’s gender or appearance.

Users quickly pointed out that the company’s original tweets mentioned using AI to evaluate nonverbal cues. The situation grew murkier when a Lemonade blog post claimed the company does not use AI to reject claims based on physical or personal features.

The post discussed how Lemonade uses facial recognition to flag cases where the same person makes claims under multiple identities. However, the initial tweet mentioned nonverbal cues, which seem different from studying a person’s face to authenticate who they are.

Saying something like “Lemonade AI uses facial recognition for identity verification during the claims process” would have stopped many people from reaching frightening conclusions. The blog also brought up behavioral research suggesting individuals lie less often when watching themselves speak, such as through a phone’s selfie camera. It says the approach allows Lemonade to pay “legitimate claims faster while keeping costs down.” Other insurance companies likely use artificial intelligence differently, though.
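
Lemonade hasn’t detailed how its facial recognition check works, but identity-matching systems in general compare numerical “embeddings” of faces: if two claims filed under different names produce nearly identical embeddings, the pair gets flagged for human review. The following is a rough, hypothetical sketch of that idea; the embeddings here are toy vectors standing in for the output of a real face-recognition model.

```python
# Hypothetical sketch of duplicate-identity flagging via face embeddings.
# Real embeddings would come from a trained face-recognition model; these
# are just small example vectors.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def flag_duplicate_identities(claims, threshold=0.98):
    """Return pairs of claims whose face embeddings are nearly identical."""
    names = list(claims)
    flagged = []
    for i, first in enumerate(names):
        for second in names[i + 1:]:
            if cosine_similarity(claims[first], claims[second]) >= threshold:
                flagged.append((first, second))  # same face, different claimed identity
    return flagged

claims = {
    "claim_1042_jane_doe": [0.12, 0.87, 0.44, 0.31],
    "claim_2210_j_smith":  [0.12, 0.86, 0.45, 0.30],  # nearly identical face
    "claim_3307_a_lopez":  [0.91, 0.10, 0.22, 0.65],
}
print(flag_duplicate_identities(claims))
```

Verifying that the same face isn’t filing under multiple names is a much narrower use of the technology than scanning a video for “nonverbal cues,” which is why the wording of the original tweet mattered so much.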

A potential concern with any AI insurance tool is that people under stress may show characteristics that mirror those of untruthful individuals. A policyholder may stammer, speak quickly, repeat themselves, or glance around while recording a claims video. They could show those signs due to great distress, not necessarily dishonesty. The human resources sector faces the same issue when using AI during interviews: people under pressure often behave unlike themselves.

AI Usage and Data Breach Potential

AI algorithm performance typically improves as a tool gains access to more information. Lemonade’s original tweets claimed the company collects more than 1,600 data points per customer. That sheer volume raises concerns.

First, you might wonder what the algorithm knows about you and whether it has drawn any incorrect conclusions. Another worry stems from whether Lemonade and other insurance AI companies adequately protect that data.


Cybercriminals aim to do the worst damage possible when targeting victims. That often means attempting to infiltrate the networks and tools holding the most data. Online perpetrators also know that AI requires lots of information to work well, and they like stealing data to sell later on the dark web.

In a February 2020 incident, a facial recognition company called Clearview AI suffered a data breach. CPO reports that unauthorized parties accessed its complete client list and information about those entities’ activities. The business had state law enforcement and federal agencies, including the FBI and Department of Homeland Security, among its customers.

Data breaches hurt customers by eroding their trust and putting them at risk for identity theft. Since incidents of stolen or mishandled data happen so frequently, many people may balk at letting an AI insurance tool gather information about them in the background. That’s especially true if a company fails to specify its data protection and cybersecurity policies.

Convenience Coupled With Concern

AI used in the insurance sector has numerous helpful aspects. Many people love typing queries to chatbots and getting near-instant responses rather than spending precious time on the phone to reach an agent.

If an AI insurance claims tool draws the correct conclusions and company representatives keep data protected, the benefits are obvious. However, this overview is a reminder that AI is not a foolproof solution, and companies may misuse it to boost profits. As more insurers explore AI, tech analysts and consumers must keep those companies honest and ethical by raising their valid hesitations. Doing so will also help keep users’ data safe from cybercrime.

Source: makeuseof.com
