Are we making a scapegoat out of FaceApp?

Mismanagement of data is commonplace today, so are we really surprised that FaceApp is any different?
30 July 2019

Quickly shunned in the face of privacy fears? Source: Shutterstock

In recent weeks, photos of people artificially aged by an app have flooded social media. The app in question is FaceApp, and though it first appeared in 2017, it has seen a recent surge of enthusiasm, particularly for its age filter, reflected in the hashtag #AgeChallenge.

It wasn’t long, however, before developers began to ask questions about its data policies. Joshua Nozzi first sounded the alarm when he claimed on Twitter that the app uploaded all the photos on the user’s phone. Others found this not to be the case, but they did find that individual photos submitted to the app for filtering were stored on a server. FaceApp’s chief executive told The Verge that these were deleted ‘not long after’.

That didn’t alleviate the concerns of those who have already used the app and who now fear their face is on a server in Russia, a country now synonymous with ethically ambiguous digital practices. Facial-recognition technology is developing rapidly, and it is so powerful that even Microsoft has called for regulation. The result has been a minor hysteria. ‘Is FaceApp safe?’ asks TechRadar.

James Whatley, a strategy partner at the experience marketing agency Digitas UK, pointed out to WIRED that the app has no ‘opt-out’ option, but nonetheless said that ‘if we’re being brutally honest, the terms of use regarding FaceApp are no different to any of the multiple social media platforms billions of people use every day.’ So why the fuss? Why this app, when there are plenty of companies that mismanage or abuse our private information?

First of all, there is something especially personal about a close-up portrait photo. And the connection to Russia probably doesn’t help. But could it also be that, in full knowledge of how chronically the more familiar tech companies mishandle our data, and yet unable to tear ourselves away from them, we have turned FaceApp into a kind of scapegoat?

The privacy paradox—about which I’ve written before—is that we value our privacy but fail to abandon platforms that abuse it. I put this down to the perceived primacy of convenience, as well as the idea that, since the offending companies have our information already, we have little to lose by continuing to use their products. In the shape of FaceApp, we have a convenient object at which to direct our collective frustration: it’s relatively new, it doesn’t affect our lives and, to top it all off, it’s Russian.

If the opprobrium leveled at FaceApp and its makers does reflect the inconvenient truth that the worst perpetrators of data abuse are much closer to home, and yet so enmeshed in our lives that we feel we can’t live without them, then it shows just how frustrated we’ve collectively become.

But it might be more productive to take the energy we are wasting on a frenzy over an app we will soon forget, use it instead to make plain our dissatisfaction with the mega-companies that take a cavalier attitude to our data while raking in record profits, and try to arrive at a long-term solution.