Insurers are journeying toward AI, but regulatory speed bumps lie in wait
Insurance companies have been slow to embrace technology because of their legacy infrastructure.
However, these organizations understand that failing to digitally transform can have a catastrophic impact on their business in the near future as customers’ expectations increase and new-age digital-only insurtech companies disrupt the market.
As a result, insurance companies have been slowly tearing up their archaic core to make way for seamless platforms that allow for data to flow across the organization. The next step is to embrace artificial intelligence (AI).
Legal specialists at Crowell & Moring suggest that although AI offers exciting opportunities, insurers need to be cautious.
Rushing the development and deployment of AI models could be catastrophic for insurers given the nature of the technology and the regulatory environment prevalent in the US and EU.
“These systems are built through the harvesting of personal information from millions of people and are used to make decisions affecting millions more,” said Crowell & Moring partner Laura Foggan.
“They’re exciting new business tools, but they also pose liability issues under existing laws and regulations. In addition, state and federal officials are considering new laws and regulations that are specific to AI systems.”
What’s the concern with AI in insurance?
AI only works when an insurer has data to feed into the algorithm or model.
While insurers hold vast amounts of customer data, there are gaps they can fill by acquiring data from non-traditional sources such as social media or by participating in digital ecosystems. That data is the basis of C&M’s concerns.
According to Foggan, accessing such data and using the technology without attention to its ongoing governance can land insurers in trouble with the law and even unintentionally harm the communities they serve.
To avoid this, Foggan believes insurers must, first and foremost, ensure they comply with the privacy and security laws of the regions in which they operate.
Further, the results of AI systems are often difficult to explain, but in the insurance industry that is not an excuse: “[…] when an algorithm manifests as a ‘black box‘, many may feel skeptical about the results.” Foggan therefore recommends building AI systems with a high degree of transparency.
Finally, an important point Foggan makes is with regards to proxy discrimination that might creep into an AI system that an insurer uses.
“Even if they do not recognize protected classes such as race or religion, AI algorithms could seize on ‘proxy’ criteria (such as ZIP codes or even social media habits) that are historically or commonly associated with people in these classes. If the resulting decisions have a disparate impact on protected classes, they could pose a liability risk,” the C&M partner explained.
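One common way such a disparate impact is screened for is the “four-fifths” (80%) rule of thumb: compare approval rates between groups and flag the model if the lower rate falls below 80% of the higher one. The sketch below is purely illustrative; the group labels, decision data, and the 0.8 threshold are assumptions for the example, not drawn from any regulation cited in this article.

```python
# Illustrative sketch: screening an underwriting model's decisions for
# disparate impact with the "four-fifths" rule of thumb.
# All data and names here are hypothetical.

def selection_rate(decisions):
    """Fraction of applicants approved in a group (1 = approved, 0 = declined)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher group's."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    high, low = max(rate_a, rate_b), min(rate_a, rate_b)
    return low / high if high else 1.0

def flags_disparate_impact(group_a, group_b, threshold=0.8):
    """True if the ratio falls below the four-fifths threshold."""
    return disparate_impact_ratio(group_a, group_b) < threshold

# Hypothetical model decisions for two demographic groups
group_a = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1]   # 90% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% approved

print(disparate_impact_ratio(group_a, group_b))   # ~0.44, well below 0.8
print(flags_disparate_impact(group_a, group_b))   # True
```

A check like this says nothing about *why* the gap exists; in the proxy-discrimination scenario Foggan describes, the gap would arise even though no protected attribute was ever fed to the model.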
The EU and the US are both regulatory minefields
Despite the challenges, insurers cannot avoid using AI. Practicing what they preach, they must take a measured amount of risk, coupled with plenty of precaution, to keep up with competitors.
Therefore, C&M Senior Counsel Kelly Tsai believes that insurers should prepare for increased legislation and regulation in the use of data fueling AI in decision making.
In the EU, for example, the General Data Protection Regulation (GDPR) gives individuals the right to opt out of automated tools that evaluate personal characteristics and produce decisions with legal effects.
“It also mandates safeguards on such evaluations aimed at preserving due process and reducing discrimination,” reminded Tsai.
In the US, on the other hand, regulations such as the Fair Credit Reporting Act and the Fair Housing Act govern the use of personal information. In the insurance industry specifically, New York became the first state to issue guidance on the use of external consumer data in life insurance underwriting.
The circular issued by NY warns that some algorithms and models “may either lack a sufficient rationale or actuarial basis and may also have a strong potential to have a disparate impact” on protected classes.
According to Tsai, the circular also says that insurers “may not use an external data source [or vendor or algorithm] to collect or use information that… they would be prohibited from collecting directly.” Nor could they rely on “the proprietary nature of a third-party vendor’s algorithmic processes to justify the lack of specificity related to an adverse underwriting action”.
Other states are expected to follow in New York’s footsteps, and a federal bill is expected to reach the table soon as well.
Tsai and Foggan, both members of the firm’s Insurance/Reinsurance Group, recognize that there are plenty of laws for insurers to navigate when adopting AI. What really matters is that insurers develop and deploy the technology with an eye on compliance. That is how to win in the long term.