Let’s rethink and expand the Trust Economy

Is a business sustainable if it both forges and violates trust on different levels?
13 June 2019

Trust is fundamental to the sharing economy. Source: Shutterstock

Uber and Airbnb have been cited as examples of the sharing economy as well as the trust economy. TED Talks suggest that trust has become a marketplace force. But we need to look at where that force appears, where it doesn’t, and what it facilitates.

When I hop in a vehicle from a ride-hailing service, I typically trust the driver. But does the driver trust the company? An Uber driver in L.A. recently told The Guardian, “You can’t tell me a billion-dollar company can’t afford to pay their drivers when all they really need to worry about is marketing and upkeep of the app.”

Trust connects consumers and contractors. However, compensation, unionization, and legal issues all undermine trust within the overarching corporate structure. Is something sustainable if it both forges and violates trust on different levels? Can that be fairly characterized as a “trust economy,” or is it just a continuation of the same power dynamics that have existed for thousands of years, albeit with a high-tech spin?

Trust is also rapidly deteriorating on social media platforms, and they have no easy exit. Transportation network companies may transcend their workforce issues by transitioning to fleets of autonomous vehicles. But if social media platforms fail to restore and build trust, they might not endure in their current versions, under the same executive leadership, or with the same degree of unregulated liberty.

The entire business model of social media is dependent on trust. Users generate a large amount of content for one another, which is oftentimes intimate, vulnerable, aspirational, entertaining, argumentative, affiliative, and instrumental to self-construction. The content captures attention. The platform monetizes that attention. The attention is especially valuable to advertisers if it is well-analyzed and categorized by the platform. And here again, the user voluntarily participates in the development of a psychographic profile through their engagement with content. The user has created both the content and the mechanism of ad targeting.

That house of cards collapses as soon as two things happen: 1) the user feels their trust has been violated, and 2) the severity of that violation eclipses the value they receive from the platform.

Number one has already been checked off, due in large part to data breaches and scandals, political interference, and a recurring corporate disregard for the PR implications of its activities. Number two has not yet been checked off. User growth and revenue are still increasing. Consumers feel psychologically invested in their social media accounts. They satisfy some need or compulsion by checking their notifications and feeds. And they buy into the illusion of a free service. But there’s a breaking point in any relationship, and user loyalty should not be taken for granted. Just ask Myspace.

So, when will we see change?

It’s hard to convince a multinational technology corporation to “do the right thing.” In our murky world, it’s even harder to define “the right thing.” It’s a lot easier to speak to financial interests. The expansion of the trust economy is in the financial interests of any digital company that wants to preserve existing revenue streams, explore new ones, and vanquish competitors.

The conversation around GDPR and privacy measures is already quite familiar to many. But here’s a new goal, a way to proactively address a growing problem, earn back trust, and increase market share:

Create technology that can reliably identify deepfakes, preferably through a browser extension or app.

There is demand. People are increasingly concerned about what is and is not true. They need a tool that informs their sense of trust, and they need to trust the tool itself.

Social media networks algorithmically encourage echo chambers, as part of an effort to maximize engagement. But this promotes lies, distortions, and social toxicity. I’m not particularly interested in a corporate evaluation of “truth” or even in an AI-powered fact checker. This is a minefield, filled with conflicts of interest and biased datasets. But it is technologically possible to identify videos that were doctored, to unambiguously label these videos as fraudulent, and to open those echo chambers again. That is attainable.

An AI browser extension could analyze videos for clues such as an odd or inconsistent frame rate, unusual blinking patterns, a lack of blood flow under the skin, or abnormal and uncharacteristic speech patterns. A deepfake video has the potential to manipulate a consumer, ruin a reputation, sway an election, or even escalate a global conflict. And all those damages could be mitigated by one devoted and continually updated project.
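To make the idea concrete, here is a minimal sketch of how such a tool might combine those cues into a single verdict. Everything here is hypothetical: the signal names, scores, weights, and threshold are illustrative assumptions, not a real detection model, and a production system would derive the per-cue scores from actual video analysis.

```python
# Hypothetical sketch: fold several per-cue anomaly scores into one
# weighted "suspicion score" for a video. Names and weights are
# illustrative assumptions only, not a real deepfake detector.

from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    score: float   # 0.0 (looks normal) to 1.0 (highly anomalous)
    weight: float  # relative importance of this cue

def suspicion_score(signals):
    """Weighted average of per-cue anomaly scores, in [0, 1]."""
    total_weight = sum(s.weight for s in signals)
    if total_weight == 0:
        return 0.0
    return sum(s.score * s.weight for s in signals) / total_weight

def label(score, threshold=0.6):
    """Map a suspicion score to a user-facing label."""
    return "likely doctored" if score >= threshold else "no anomalies detected"

# Example cues drawn from the article: frame rate, blinking,
# blood flow under the skin, and speech patterns.
signals = [
    Signal("frame_rate_consistency", score=0.8, weight=1.0),
    Signal("blink_pattern",          score=0.7, weight=1.5),
    Signal("skin_blood_flow",        score=0.9, weight=2.0),
    Signal("speech_pattern",         score=0.2, weight=1.0),
]

result = suspicion_score(signals)
print(f"{result:.2f} -> {label(result)}")
```

The design choice that matters here is the unambiguous label at the end: rather than asking users to interpret a raw score, the tool would flag a video plainly, which is exactly the kind of trust-informing signal the article argues for.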

This doesn’t even require regulation. It just requires the will to hold the banner of an expanding trust economy. The company that creates this tool will have a competitive advantage. People will start directing their attention to the most trustworthy digital brands and places. And wherever they go, the money will go.