Trust enablement: AI risk alerts can deter fraudsters
Without trust, business falls apart. Companies trust that they’ll be paid for their work, and clients expect to receive goods and services that meet expectations. Firms want to hire trustworthy employees and have customers who play by the rules. Trust enablement goes hand-in-hand with company success and is essential to other types of organizations too.
However, while managers may prefer to look people in the eye to make a call on whom to trust and whom to avoid, fast-paced digital life has other ideas. Plus, even if you could sit down and quiz everyone in a large organization – for example, to protect against industrial espionage – it’s not a trust enablement strategy that scales well.
The optimum approach for businesses is to save their energy for suspicious cases rather than placing everyone under scrutiny. But the challenge – up until recently – has been how to do this reliably. And the answer could lie in voice analysis, although perhaps not using the technology that you may be picturing.
Voice stress analysis, which has been around since the 1960s, might look good in the movies, but its success is mixed. The approach still has critics to this day and remains time-consuming, which brings us to a much faster and more accurate alternative that leverages the analytical capabilities of AI.
Clearspeed, a developer of modern AI-based risk alerting solutions, has been busy building up its client base within the insurance industry and entering sports integrity sectors – to give a couple of examples of areas of interest. Naturally, users are keen to know how the automated trust enablement tool works.
On TechHQ, we’ve written about how voiceprints are helping banks to combat financial fraud. Voiceprints may take into account hundreds of different signals, which can include not just raw audio, but also related authentication clues. These can include whether the microphone compression is familiar, and the cadence at which callers tap inputs on their smartphone screens. However, Clearspeed’s approach is different again.
What’s different about Clearspeed’s trust enablement tech
“The specific features evaluated by RRA [Remote Risk Assessment] and the methodology used to analyze these features are a Trade Secret,” writes Clearspeed. However, there are a few clues on how its RRA technology (now branded as Clearspeed) – which listens to how callers respond to a series of questions that can be in any language – works.
Key to the algorithm’s success is that the cranial nerves linked to the muscles controlling speech have the potential to betray whether answers are suspicious. “RRA measures macro-level stylistic vocal changes elicited in the voice production process through voluntary and involuntary neurological pathways,” explained the firm in some of its early documentation.
If a subject responds with an answer that they believe to be true, no alert is raised. However, vocalizing a knowingly false response raises an alarm (in the vast majority of cases) – based on brain activity that’s detectable in tiny chunks of speech.
Again, Clearspeed is keeping its cards close to its chest, but it’s feasible that rendering those speech samples as images – or, more correctly, spectrograms – enables AI to find tell-tale patterns in the output.
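To make the idea concrete, here is a minimal, purely illustrative sketch of how a speech sample can be turned into a spectrogram-style grid of time and frequency magnitudes – the kind of image-like representation a pattern-finding model could scan. This is not Clearspeed’s method (which is a trade secret); it uses a plain short-time DFT in standard-library Python, where real systems would use optimized FFT libraries.

```python
import cmath
import math

def spectrogram(samples, frame_size=64, hop=32):
    """Return a list of magnitude spectra, one per overlapping frame."""
    frames = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size]
        # Hann window: tapers frame edges to reduce spectral leakage.
        windowed = [s * 0.5 * (1 - math.cos(2 * math.pi * n / (frame_size - 1)))
                    for n, s in enumerate(frame)]
        # Magnitude of each DFT bin up to the Nyquist frequency.
        spectrum = [abs(sum(windowed[n] * cmath.exp(-2j * math.pi * k * n / frame_size)
                            for n in range(frame_size)))
                    for k in range(frame_size // 2)]
        frames.append(spectrum)
    return frames

# Toy signal: a 500 Hz tone sampled at 8 kHz. With 64-sample frames the
# bin spacing is 8000 / 64 = 125 Hz, so the tone lands exactly in bin 4.
rate = 8000
tone = [math.sin(2 * math.pi * 500 * t / rate) for t in range(512)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin)  # prints 4
```

Stacking those per-frame spectra side by side gives the familiar spectrogram image; a classifier trained on such images could, in principle, pick out vocal patterns too subtle for a human listener.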
Trust enablement at scale
In practice, the approach lets users dramatically narrow down the number of suspicious cases that need further investigation, saving resources. For insurers, this means that claims can be processed much faster for the majority of claimants, which improves renewal rates.
Also, anecdotally, when faced with a quickfire set of automated questions, some fraudulent claimants choose to withdraw their claims. This suggests that AI-based risk alerting systems could act as a deterrent in their own right, as word spreads of their effectiveness.
In the sports world, Clearspeed’s technology has been used to screen the pool of officials selected to judge international boxing competitions, according to information published by McLaren Global Sporting Solutions [PDF].
Other success stories include identifying insider threats within a large, 4000-person organization. Here, business data – for example, records of who had access to what – helped narrow the list of suspects to around 400. From that pool, AI-based risk alerts flagged 21 individuals as suspicious. At that point, two employees confessed to fraud, and the investigation eventually led to 19 prosecutions.