CX leaders are now responsible for the data security of their AI solutions – but are they ready for it?

21 February 2024

Source: Zendesk

Why are CX leaders responsible for AI security?

Responsibility for AI safety has historically sat with IT leaders and technical specialists, but a notable shift has occurred recently. Several factors are behind the change:

  • AI has evolved from simple process automation to a central role in customer experience (CX).
  • Customer service chatbots and hyper-personalised product recommendations are now the norm rather than a differentiator.
  • Businesses host huge and growing amounts of sensitive customer data.

As a result, the onus of protecting data now falls on the people who decide which AI-enhanced software, apps and features are integrated into the company’s CX offering.


The more data used to train the algorithms that facilitate a personalised CX, the greater the risk of a serious data breach. Such a breach could lead to reputational damage, costly non-compliance penalties and intensive data recovery processes. These impacts disproportionately hurt smaller businesses, which face the same high expectations for personalised CX yet do not necessarily have the resources to implement robust cyber security measures. Data security is, therefore, at the forefront of CX leaders’ minds in organisations large and small. Indeed, Zendesk’s annual Customer Experience Trends report reveals that 81 per cent of CX leaders say data protection and cyber security are critical facets of their customer service strategies.

The consequences of data misuse

CX leaders are the new drivers of customer data privacy in the business and will remain so as AI extends further into their domain. Naturally, there could be devastating financial consequences if data is not handled carefully, but this is not the only reason to take it seriously. Customers are becoming increasingly aware of the value of their data and of growing cyber risks, with 57 per cent feeling they are constantly under threat of being scammed. They demand reassurance from the businesses they interact with that their information will be kept safe and secure, and not unnecessarily distributed to third parties.

AI data won’t bite if handled with care

While it is wise to be cautious – dare we mention the DPD chatbot swearing incident? – customer-facing businesses should not be discouraged from integrating the latest AI-driven solutions. According to the Zendesk report, 68 per cent of CX leaders believe generative AI chatbots can help build a stronger emotional connection with clients. The technology is also a game changer for service agents, who remain essential to the customer support landscape. Many consumers still want to ‘talk to a human’, and agents prove especially important when chatbots are asked something outside their remit. AI can provide real-time access to information about customer preferences, history and issues, enabling agents to offer more personalised and empathetic support. By helping staff deal with more queries faster, it lets them reach targets more easily and focus their energy on more interesting strategic work, ultimately improving the employee experience.


Furthermore, algorithms trained on customer data can better inform business decision-making. By analysing patterns, preferences and trends derived from the wealth of customer interactions, AI can provide valuable insights for strategic planning and product development. This data-driven approach enhances operational efficiency and fosters a deeper understanding of customer behaviour and market dynamics. Businesses can adapt their strategies quickly, ensuring agility and fast responses to a competitive landscape.

Consider products with built-in transparency and data security

Transparency is imperative when it comes to AI safety and implementation. However, not all CX leaders can easily provide this assurance, particularly if they have limited IT and cyber security knowledge. The simplest solution is, therefore, to invest in products with built-in transparency and data security. A leading example is Zendesk, a customer service and engagement platform that leverages advanced AI technologies, including proprietary machine learning models and generative AI supported by OpenAI. Zendesk AI is built on billions of real customer service interactions to help users write support articles from scratch, change the tone of a response and deploy bots that sound like people.

The Zendesk approach

Zendesk understands the necessity of offering data transparency and security alongside its AI features. The company provides visibility into its processes through its Trust Centre, Regional Data Hosting Policy and Service Data Deletion Policy, enabling CX leaders to easily understand how user data is utilised. Excluding identifiable information from training datasets and applying natural language processing algorithms to remove sensitive details keep customer data safe from unintended disclosure. Furthermore, converting data into machine-readable formats using tokenisation ensures that, even in the event of unauthorised access, the data remains indecipherable without the appropriate tools. By adopting such measures, Zendesk meets regulatory requirements and instils confidence in its users.
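To make the two safeguards described above concrete, the toy Python sketch below shows (1) redaction, where identifiable details are stripped outright before text is used for training, and (2) tokenisation, where sensitive values are swapped for opaque tokens and the real values are held in a separate vault. This is purely illustrative – the function names, patterns and `Tokeniser` class are invented for this example and do not reflect Zendesk’s actual implementation.

```python
import re
import secrets

# Simple patterns for two common kinds of identifiable data.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(text: str) -> str:
    """Remove identifiable details entirely, e.g. before training."""
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    text = CARD.sub("[REDACTED_CARD]", text)
    return text

class Tokeniser:
    """Swap sensitive values for opaque tokens; originals live in a vault."""

    def __init__(self) -> None:
        self.vault: dict[str, str] = {}

    def tokenise(self, text: str) -> str:
        def _swap(match: re.Match) -> str:
            token = f"<tok_{secrets.token_hex(4)}>"
            # The real value is recoverable only with access to the vault.
            self.vault[token] = match.group(0)
            return token
        return EMAIL.sub(_swap, text)

    def detokenise(self, text: str) -> str:
        for token, value in self.vault.items():
            text = text.replace(token, value)
        return text

msg = "Order query from jane@example.com, card 4111 1111 1111 1111"
print(redact(msg))  # both details replaced with [REDACTED_...] markers

t = Tokeniser()
safe = t.tokenise("Contact jane@example.com")
assert "jane@example.com" not in safe  # indecipherable without the vault
assert t.detokenise(safe) == "Contact jane@example.com"
```

The design point is that a leaked transcript of tokenised text reveals nothing by itself: without the separately secured vault, the tokens cannot be mapped back to the original values.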

Zendesk’s Customer Experience Trends 2024 report surveyed 2,818 consumers and 4,441 CX personnel about AI and intelligent experiences, trustworthy data experiences, and next-gen immersive experiences. To discover more insight into the future of CX, access the full report for free here.