
It is time to close the digital trust gap

August 29, 2023 | By Bhaskar Chakravorti

Now that the effects of the 2020 “digital shock” — the involuntary, accelerated embrace of digital technology — are receding, it is time to turn attention to what might sustain that embrace organically: digital trust. The moment seems right: trust in digital platforms has declined substantially just as the industry faces a post-pandemic slowdown and must win back users. The trust gap is an untapped business opportunity. With digital assets increasingly crucial to our welfare, not unlike health and financial assets, why can’t digital trust be “productized” into a professional industry, as healthcare and financial services are today?
 
Currently, the solutions are ad hoc and, in many cases, in the hands of those who may have created the problem in the first place. Regulators have made varying degrees of progress. Digital platforms and technology-dependent companies have taken steps toward trust-building, but they fall short of the need. There are numerous piecemeal technology fixes. What’s missing is a holistic view of digital trust that considers its many facets, along with service providers who act in the interest of the user to meet their digital trust needs — analogous to the general practitioner who knows the whole individual and directs them to the appropriate specialists and technological interventions.
 
What is keeping natural market forces from producing solutions? Such an industry will need clarity on three basic questions about the problem and a fourth about the solution:
 
What is digital trust?
Trust, digital or otherwise, is hard to define, but we recognize it when it’s missing. The most cogent framing of the idea comes from the Nobel laureate economist Kenneth Arrow, who spoke of trust as a “lubricant” in a social system. In the context of digital interactions, we can think of such lubricants as mechanisms that reduce unproductive friction and help users build confidence in their digital interactions. It is important to separate this from productive friction designed to protect the user and build trust — say, passwords, consent to cookie policies, or two-factor authentication routines. There are other forms of productive friction as well: audio cues (the artificial shutter sound on a phone camera) or visual ones (the image of a file being transferred) are built into platforms to give users familiar signals that their digital request is being processed.
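As a concrete instance of such productive friction, consider the rotating short code behind many two-factor-authentication prompts. Here is a minimal sketch of how a time-based one-time password is derived, following the RFC 6238 scheme; the shared secret below is a placeholder, not a real credential.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238, SHA-1 variant)."""
    counter = int(for_time) // step              # 30-second time window
    msg = struct.pack(">Q", counter)             # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and device share a secret; both derive the same short-lived code.
# The RFC's published test secret at timestamp 59 yields the known vector:
print(totp(b"12345678901234567890", 59))  # → 287082
```

Because both sides derive the code independently from the clock and a shared secret, an intercepted code loses its value once the time window expires — friction that protects the user rather than merely inconveniencing them.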
 
Does digital trust add value?
Clearly, users and adopters of new technology are making tradeoffs across need, convenience, and distrust. Some may argue that people will prioritize the convenience and benefits of digital systems and overlook their lack of trust. Indeed, our own analyses of digital trust, which we have written about earlier in HBR, showed that what people say about trust and how they behave are different. Moreover, as the digital economy’s presence grows, the user’s agency diminishes. For one, the devices, software, and platforms control the mechanics of the interactions and the data generated. Many digital interactions also have an addictive component — by design. On top of that, every individual user loses bargaining power as their options shrink. For all these reasons, it could be argued that users will engage even if they lack trust in digital systems.
 
Nevertheless, studies repeatedly show that trust plays a key role in digital behavior and that its absence is locking in unrealized potential in existing and forthcoming applications: for example, greater trust in Facebook is strongly correlated with intensity of Facebook use; an analysis of why the use of AI in healthcare is lagging reveals that lack of trust in algorithms is critical; and lack of trust has been a major hurdle to the adoption of self-driving cars.
 
Mitigating the tradeoffs could release substantial value: as one measure, a 5% increase in digital trust, according to an index created by Callsign, the identity solutions company, was shown to translate into an economy-wide bump of up to $3,000 of GDP per capita, on average. More broadly, trust yields long-term competitive advantage: companies considered trustworthy outperform those that are not by a factor of 2.5, according to a Deloitte study.
 
What causes the digital trust gap?
Trust flows not from a single source but from many. There are six ways in which most unmet digital trust needs arise:
  • Identity: The user’s identity must be authenticated every time they are granted access to a digital system or a transaction is made in their name; otherwise, there is a high risk of fraudulent transactions along with the problems associated with identity theft. Consumers in the U.S. lost $5.8 billion to fraud in 2021, an increase of more than 70% over the year before, according to the FTC.
  • Security: Digital ecosystems are increasingly stores of sensitive data which must be kept safe from breaches and cyber-attacks. With the spread of new technologies and new forms of working and living with digital systems, the attack surface is growing. 53% of users said they would use a company only after making sure of its reputation for keeping data safe, according to a McKinsey survey.
  • Privacy: Users’ data should be deployed only in the users’ interests and not shared without their permission. 46% of users said they would switch to another brand if they were unsure how their data would be used, according to the same McKinsey survey.
  • Authenticity: Users turn to digital systems for everything from news to ratings and recommendations to information on all kinds of topics, all of which ought to meet thresholds of quality and authenticity. At a minimum, the content should not be harmful to the user or to the institutions — e.g., democratic systems, law and order — that preserve the user’s interests and societal welfare. Here too, the attack surface is growing. As for news alone, 40% of users globally are avoiding mainstream news and turning to digital platforms, up from 29% in 2017, and in the U.S. only 26% trust the news generally, according to a report from Oxford’s Reuters Institute. Misinformation on these platforms is growing, and its global cost was estimated at $78 billion annually by a University of Baltimore study.
  • Reliability: Increasingly, digital systems are making choices on our behalf and even taking actions that displace our own — primarily through automation, decision-making tools, software, and algorithms. Some of these systems must have zero margin for error, as the consequences could be dire. Self-driving cars are a good example; nearly half of U.S. drivers said they would feel less safe sharing the roads with self-driving cars, according to a AAA and Harvard study.
  • Transparency: Digital systems can handle complexity in ways humans cannot, but there is a cost to tackling complexity: the systems become black boxes. Confidence is built when decisions and systems are transparent and their outcomes are seen as fair, unbiased, and consistent with ethical principles. Moreover, given the growing interest in sustainability and ESG practices, transparency about how a firm’s digital activities affect these measures will increasingly matter. Three-quarters of users will use a brand if they sense it is committed to transparency, according to an Accenture study.

While there is no silver-bullet solution that can take on all of these, a company that has authentically addressed and explored solutions to these issues can lay claim to a powerful source of competitive advantage. Given the multi-part nature of the trust gap, solutions are hard to find, which means the advantage can be sustained if a company brings a holistic solution together. Which brings us to the next question:

What are the components of a digital trust solution?
Assembling a solution involves several components: deploying technology; reconfiguring business operations and modifying user behaviors; setting standards, policies, and regulations; and turning to professionals who can bring these components together into a digital trust product that serves the needs of the user. Consider each below.

Deploying Technologies
There are many technological options to turn to depending on the area of need. For example, to address the identity issue, there are several authentication technologies, from digital ID systems to biometrics to token- and password-based authentication. Even within any of these, there are ongoing innovations. Biometrics is particularly rich with potential, from allowing access to devices and financial transactions to operating cars: you can make payments with a smile or a wave, while grab-and-go stores dispense with check-out lines altogether by combining biometrics with sensors, computer vision, and deep learning.

Other, versatile technologies cut across many needs. AI can interrogate vast amounts of data, validating it and finding gaps, fakes and inconsistencies. Quantum computing, still early in its development, can run such analytics even faster. The blockchain can maintain trustworthy records of transactions while its use of distributed ledgers limits the impact of a single attack.
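The tamper-evidence behind such ledgers is simpler than it sounds. The sketch below (a minimal illustration, not any production blockchain) shows how chaining each record to the hash of its predecessor lets anyone detect after-the-fact edits; the record format is made up for the example.

```python
import hashlib
import json

def block_hash(contents: dict) -> str:
    # Hash the block's contents, including the previous block's hash, so any
    # change to an earlier record invalidates every block that follows it.
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    contents = {"record": record, "prev_hash": prev}
    chain.append({**contents, "hash": block_hash(contents)})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for block in chain:
        contents = {"record": block["record"], "prev_hash": block["prev_hash"]}
        if block["prev_hash"] != prev or block_hash(contents) != block["hash"]:
            return False
        prev = block["hash"]
    return True

ledger = []
append_block(ledger, "payment: A -> B, $10")
append_block(ledger, "payment: B -> C, $4")
assert verify(ledger)                          # untouched ledger checks out
ledger[0]["record"] = "payment: A -> B, $1000" # tamper with history...
assert not verify(ledger)                      # ...and verification fails
```

Distributing many copies of such a chain is what limits the impact of a single attack: an adversary would have to rewrite the history held by most participants at once.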

Some needs can benefit from multiple technologies directed towards them. Behavioral analytics can help with identifying the primary sources of cyber vulnerability, while homomorphic encryption allows users to work with encrypted data without first decrypting it.
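To make “working with encrypted data without first decrypting it” concrete, here is a toy sketch using unpadded (“textbook”) RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields the encryption of the product. The tiny modulus is for illustration only; practical systems use hardened schemes such as Paillier or CKKS.

```python
# Toy demonstration of a homomorphic property. Textbook RSA satisfies
# E(a) * E(b) mod n == E(a * b mod n), so a server can combine values
# it cannot read. Insecure as written; for illustration only.
n = 3233   # n = 61 * 53, a deliberately tiny modulus
e = 17     # public exponent

def encrypt(m: int) -> int:
    return pow(m, e, n)

a, b = 12, 7
combined = (encrypt(a) * encrypt(b)) % n   # computed on ciphertexts only
assert combined == encrypt((a * b) % n)    # equals the encrypted product
print("homomorphic property holds")
```

Fully homomorphic schemes extend this idea to arbitrary computation, which is what lets an untrusted party process sensitive data it never sees in the clear.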

Technology, however, can cut both ways — building digital trust, but also potentially undermining it. AI can be used for cyber-attacks at scale or for producing deepfake videos. Biometrics and other ID systems run the risk of being used for surveillance, with potential for grave mistakes and privacy violations. When automated systems fail some percentage of the time, the outcomes can be disastrous. Blockchain-based systems can be used for illegal activities or for speculative excess. To make technology a reliable ally, it must be paired with the other components of the trust toolbox.

Reconfiguring Business Operations and Modifying User Behaviors
Changes in the ways businesses operate and users behave are key to closing the digital trust gap, and technologies can enable these changes. For example, data analytics can trace usage patterns or access requests and flag anomalies. It can also gauge whether software on users’ computers is being kept up to date and spot vulnerabilities. These findings can drive reminders, nudges, and cyber-hygiene recommendations — e.g., educating users to recognize suspicious communications, set strong passwords, and install antivirus software and network firewalls where feasible. Users also need to be educated in checking the sources of news stories and social media posts and in contextualizing information. With employees working from home, corporate cybersecurity controls will come under greater stress, and the need for users to modify their behaviors and cyber-hygiene routines is greater still.

For their part, users can play an important role in advocating for their rights and shaping public policy. They can call out violations of privacy and data misuse, spot and resist the spread of harmful content, and flag instances of corporate and state overreach. The constant engagement of users pushes both policymakers and industry decision makers toward formulating the standards, policies, and regulations that are essential to closing the trust gap.

Standards, Policies and Regulations
Technological and behavioral interventions need broader frameworks within which to operate. Industry standards set the tone and harmonize expectations. Some regulators have taken the lead and created a foundation upon which other policies can be built; for example, the EU’s General Data Protection Regulation — especially Article 5(1)(a), which states that personal data must be “processed lawfully, fairly and in a transparent manner in relation to the data subject” — makes a clear connection between law, transparency, and fairness. More policies are in development: for example, the ITI Policy Principles for Enabling Transparency of AI Systems, as well as the recently released Blueprint for an AI Bill of Rights from the White House, meant to “guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public.”

More generally, deploying standards, policies, and regulations will require coordination between government and industry. We may even find certain countries and regions taking the lead. Currently, the frameworks are inconsistent and in different stages of development depending on where one is in the world. While the EU has been a leader, the U.S. is beginning to enact laws, often on a state-by-state basis, many of which are likely to be decided in courts. China has created its own walled garden with tight controls over the digital sector. Other major countries are at various stages: India, for instance, has instituted rules for content moderation, while its approach to data protection and user privacy remains incomplete.

Some countries can act as role models or hubs for digital trust, or for key aspects of it. Estonia, a leader in digital public services, and New Zealand, which aims to lead in digital public services and in fostering a startup economy, have already declared a willingness to take on this mantle; both were founding members of a leading club of digital governments, which intends to share best practices with others. Singapore, which operates a trusted digital environment, is eager to be a nodal player in digital trust. Already a hub for financial services and for Asia-Pacific business, shipping, and aviation, it is positioning itself to lead in this space as well; it recently launched a center of excellence to study and establish best practices in digital trust.

Digital Trust Fiduciaries
As we consider the various components that must be integrated to close the digital trust gap, and the varied needs of different users — whether individuals or enterprises — there is a need for service professionals who can understand user needs and assemble an integrated, customized solution. These “digital trust fiduciaries” don’t exist as yet, but they are sorely needed and there is a market for them. Their job will be to offer digital trust as a product or service designed around the user’s needs. This cadre of professionals would, ideally, be certified and regulated, much like doctors and financial advisors. The industry would be separate from the de facto guarantors of digital trust today, i.e., the digital industry of platforms, fintechs, e-commerce, and other tech companies with whom users interact for their digital needs. This professional class could grow from within the current IT, cybersecurity, and management consulting industries, or arise as a greenfield development. The type of professional is likely to vary with the type of user — an individual or an enterprise — and, for enterprises, with the size of the client.

There is no doubt that a burgeoning market is associated with closing the trust gap. A recent analysis from SGTech, Singapore’s technology industry association, estimates that the digital trust market could hit $537 billion by 2027. That said, our original source of inspiration, Ken Arrow, also observed that “Unfortunately this (trust) is not a commodity which can be bought very easily. If you have to buy it, you already have some doubts about what you have bought.” While this new industry’s time has come, it must proceed carefully to ensure that we build trust in digital trust solutions themselves. That will not be easy, but with the industry in a lull, the time is right — which is why the work must begin now.