Who’s listening in on your business meetings?
Microsoft’s admission that it, like Amazon and Google, uses human reviewers to help train its “smart” AI-based voice services may make businesses think twice about using voice assistant systems in the workplace.
Google, Apple, Amazon, Samsung, and Microsoft are the names behind the consumer-oriented devices that are starting to make the transition into the workplace, but several firms have admitted to using massed ranks of human labor to help educate the AI “brains” behind the smart assistants.
The vision of quietly humming machine-learning arrays ensuring that Siri, Cortana, et al. get more responsive, more helpful, and more incisive appears not to be quite the case, although it should be noted that Google and Apple have since quietly ditched their AIs’ human helpers.
Instead, outside contractors employed by the data giants supply ranks of (usually poorly paid) humans who manually transcribe voice interactions from all over the world. The fact that individual people are therefore party to voice assistant users’ most intimate secrets should come as a wake-up call for business users keen to protect intellectual property.
Voice assistants & data security
The latest revelations about the digital services’ back-office methods come from the website Motherboard, whose staff received leaked documents showing that human contractors are required to transcribe the contents of Skype calls to improve the automated interpretation of human speech, particularly for the service’s real-time translation feature.
According to the latest admissions, an individual human operator is given an audio excerpt and several machine-generated translations, then chooses the most accurate or provides a better version if necessary. This type of tuning and learning has been portrayed as being undertaken entirely by intelligent algorithms. The truth, we are learning, is rather more prosaic.
The contractor behind Microsoft’s Cortana assistant gives its staff guidelines regarding data classification and punctuation styles and expects them to process around 200 pieces of data per hour. The human power behind the AI throne gets paid up to US$15 an hour, including bonuses.
A Microsoft statement to Motherboard reads: “Our disclosures have been clear that we use customer content from Cortana and Skype Translator to improve these products, we engage third party expertise to assist in this process, and we take steps to de-identify this content to protect people’s privacy.”
De-identifying or anonymizing consumer data will, theoretically, make identifying individuals by the contents of conversational snippets fed through voice assistants relatively difficult. But it can easily be imagined that the nature of conversations held in, for example, company boardrooms will make identification of companies and organizations much easier.
After all, conversations about “the new features of the XPS 13 that Production is working on” are obviously from Dell or its partners. Key product details, mentions of tactics to be used to fox competitors, and open discussion of intellectual property might be literally overheard.
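To illustrate the limitation described above, here is a minimal, purely hypothetical sketch of the kind of de-identification pass a vendor might apply. It strips obvious personal identifiers (the patterns and function names here are illustrative assumptions, not Microsoft’s actual process), yet the company-identifying content survives untouched:

```python
import re

# Hypothetical redaction patterns for obvious personal identifiers.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b")

def deidentify(snippet: str) -> str:
    """Redact emails and phone numbers from a transcript snippet."""
    snippet = EMAIL.sub("[EMAIL]", snippet)
    snippet = PHONE.sub("[PHONE]", snippet)
    return snippet

transcript = ("Ping jane.doe@dell.com about the new features of the "
              "XPS 13 that Production is working on.")
print(deidentify(transcript))
# The personal email address is gone, but "XPS 13" still points
# straight at Dell: the content itself identifies the company.
```

The point is not that any vendor uses exactly this approach, but that identifier-level redaction cannot scrub identifying context out of the conversation itself.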
With competition in the voice assistant space intense, and set to heat up further in the business market, the obvious suppliers (Amazon, Microsoft, et al.) may prove less than attractive to discerning CTOs.
Amazon’s Alexa for Business, for example, seeks to provide businesses with the tools to set up their own apps, with company-tailored dialogues (“Who’s in charge of social media?”, “How do I submit an expense report?”) becoming an “innovative and engaging part of your corporate communications”.
Perhaps it’s time for organizations interested in voice recognition systems to look beyond the usual suspects to find better attitudes to data privacy?