What’s Your Formula For Digital Trust?

By Steve Prentice

Throughout the decades in which internet-connected technology has been central to our lives, criminals have exploited the very human sentiment of trust to carry out their crimes. But are things in digital security changing? And what does this mean for the relationships you hold with your customers?

During a wide-ranging discussion of these concepts, we asked, “What is your formula for trust?” When someone says, “Trust me. I know what I’m doing,” how do you feel? How do you actually come to trust someone? Where does trust come from? Is it an emotion, like happiness or worry, or is it a more rational state based on objective observation? Most people will suggest it is both – a hybrid. You can feel trust. You can have trust in someone, but that feeling comes from observing how the person behaves. Trust without observation is simply blind faith. When you take the time to watch someone behave in a consistent manner, trust is allowed to grow. And when those behaviours turn inconsistent, trust shatters in an instant.

The Problem With Trust And Institutions

The main area where people think about trust is with other people: their leaders, managers, friends, strangers, and partners. But people also tend to trust technology, often at their peril. Thousands of individuals still fall prey every day to scam emails and phone messages announcing everything from the cancellation of a Netflix account through to generic greetings such as “hi, old friend, it’s been a long time.” This trust is exploited ruthlessly by cybercriminals the world over, hungry for live bait to feed their social engineering creations.

Those who choose to react to a scam message might trust the sender and earnestly pursue a rescue of their Netflix account. Or they may ask the sender to stop sending these messages, and then trust that they will. At the enterprise level, organizations that have been hit by ransomware will trust the threat actors to delete the data they have stolen once the ransom has been paid.

Where does this type of institutional trust come from? For older people who grew up in the pre-internet age, it may well come from decades of life experience: the people who called them on the phone generally had legitimate reasons to do so. For younger people, those who have literally grown up with digital technology at their fingertips, perhaps the trust comes simply from the technology’s constant presence: “it has always been with me. It’s part of me.”

What Do We Do About Creating More Digital Trust?

Trust is a uniquely human state, but sadly it has no place in the world of internet-connected business and crime. Trust online needs safeguards. These evolved first as passwords, then two-factor authentication, then multi-factor authentication (MFA), and later passkeys and biometrics.
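For readers who want a concrete picture of what one of these safeguards involves, below is a minimal sketch of a time-based one-time password (TOTP) check – the kind of six-digit code an authenticator app produces as a second factor. It uses Python’s pyotp library; the function names and the simplified enrollment flow are illustrative assumptions, not any particular vendor’s implementation.

```python
# Minimal sketch of a TOTP (time-based one-time password) second factor,
# using the pyotp library. Enrollment and storage are simplified here
# for illustration only.
import pyotp

def enroll_user() -> str:
    """Generate a per-user secret, stored server-side and shared with
    the user's authenticator app (for example via a QR code)."""
    return pyotp.random_base32()

def verify_second_factor(secret: str, submitted_code: str) -> bool:
    """Check the six-digit code the user typed against the shared secret.
    valid_window=1 tolerates slight clock drift between client and server."""
    totp = pyotp.TOTP(secret)
    return totp.verify(submitted_code, valid_window=1)

if __name__ == "__main__":
    secret = enroll_user()
    current_code = pyotp.TOTP(secret).now()            # what the app would display
    print(verify_second_factor(secret, current_code))  # True
    print(verify_second_factor(secret, "000000"))      # almost certainly False
```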

But adding these layers of security on top of activities that busy people want to complete right away has always been a hassle. No one likes being forced to change their password every two weeks, or to log back in when they are in a hurry to do something. People shy away from password managers as too complicated. Captcha-type tests – identifying an object inside a mosaic of photographs – are universally loathed, leading some to joke that Captcha is not so much testing a person’s ability to identify parts of a motorcycle in grainy photos as listening for the cry of frustration that only humans, and not robots, will utter. That is the true Captcha test.

That’s how it has been, anyway, and that’s what everyone in security has had to deal with – resistance on the part of the very people they are trying to protect. But are things changing now?

Are Consumers Now Knowledgeable?

One of the most interesting pieces of data to emerge from the Thales 2024 Digital Trust Index report is increased acceptance of security controls among users themselves. Consumers appear to be more knowledgeable about these controls and now actually expect them, to the tune of 81% of users expecting brands to offer MFA.

As Haider Iqbal, Product Marketing specialist for Thales Identity and Access Management, says, “Our end consumers are literally doing the job for us – they are actually telling us that this is something that they now expect.”

But James Leaton Gray, Director at The Privacy Practice, a firm that specializes in privacy and data protection, is not so sure. He suggests that people’s expectations for MFA and similar controls will always bump up against a permanent desire for convenience and speed – a belief certainly borne out by statistics revealing how quickly shoppers will abandon a shopping cart or chatbot after a delay of mere seconds.

All of this places an obligation on any provider of goods or services to determine the optimum formula for establishing trust with customers: identifying the technological safeguards that are palatable to end users in terms of comfort, ease of use and, oh yes, actual security, while also keeping pace with the relentless creativity of cybercriminal gangs determined to beat the system at every turn.

A Palatable Solution That Bolsters Your Brand

Haider suggests building this into a company’s brand using concepts such as privacy by design – an approach in which the work of establishing security and trust is seen not as a tedious, obligatory process but as something that bolsters the brand overall, in many cases by removing annoying interstitial pages and hurdles and building friction-free security into the process earlier. This has an interesting similarity to shift-left, the idea of building quality control into the software development lifecycle from the start – a shift that moved the IT industry from a sequential, waterfall-style approach to something more agile, in response to similar demands for faster, more reliable code in production. The refrain is the same for code as it is for consumer security: to make it easier and more reliable, build it into the process earlier.
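To make the shift-left side of that analogy concrete, here is a deliberately simple, hypothetical sketch of a security check run early in the development process – for example as a pre-commit hook or CI step – rather than after deployment. The scan_for_secrets function and the naive patterns it looks for are assumptions made for illustration, not a description of any real scanner.

```python
# Illustrative sketch of "shift-left" security: a simple check run early
# (e.g. as a pre-commit hook or CI step) instead of after deployment.
# The patterns below are deliberately naive examples, not a real scanner.
import re
import sys
from pathlib import Path

SUSPECT_PATTERNS = [
    re.compile(r"password\s*=\s*['\"].+['\"]", re.IGNORECASE),  # hard-coded password
    re.compile(r"AKIA[0-9A-Z]{16}"),                            # AWS-style access key ID
]

def scan_for_secrets(paths: list[Path]) -> list[str]:
    """Return human-readable findings for any file that appears to embed a secret."""
    findings = []
    for path in paths:
        text = path.read_text(errors="ignore")
        for pattern in SUSPECT_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path}: matches {pattern.pattern!r}")
    return findings

if __name__ == "__main__":
    problems = scan_for_secrets([Path(p) for p in sys.argv[1:]])
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)  # a non-zero exit fails the commit or build
```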

Perhaps, too, customers will become more willing to part with only selective amounts of data. There may not be a need to include a person’s street address or birth date on a signup form, as was done in days of old. If these pieces of data are not pertinent to a transaction, they serve only as fodder for data thieves. James and Haider both point to a technique called “progressive profiling,” in which a data set is built up over time as needed, rather than a full set of personally identifiable information (PII) being demanded at the outset.
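As a rough sketch of what progressive profiling can look like in practice, the example below asks only for the fields a given transaction actually requires and defers everything else until it is needed. The field names, transaction types and CustomerProfile class are hypothetical, invented for illustration rather than drawn from any specific product.

```python
# Hypothetical sketch of progressive profiling: ask only for the data a
# transaction actually needs, and build up the profile over time.
from dataclasses import dataclass, field

# Minimal data needed at signup versus later, higher-stakes transactions.
REQUIRED_FIELDS = {
    "signup": {"email"},
    "first_purchase": {"email", "payment_token"},
    "physical_delivery": {"email", "payment_token", "street_address"},
}

@dataclass
class CustomerProfile:
    data: dict = field(default_factory=dict)

    def missing_fields(self, transaction: str) -> set:
        """Return only the fields this transaction needs and we don't yet hold."""
        return REQUIRED_FIELDS[transaction] - set(self.data)

    def add(self, **fields) -> None:
        """Store newly supplied fields; nothing is requested speculatively."""
        self.data.update(fields)

if __name__ == "__main__":
    profile = CustomerProfile()
    print(profile.missing_fields("signup"))             # {'email'}
    profile.add(email="customer@example.com")
    print(profile.missing_fields("signup"))             # set() – signup can proceed
    print(profile.missing_fields("physical_delivery"))  # payment and address requested only now
```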

Ultimately, this becomes something each company must assess in terms of what is truly needed to keep its customers both secure and loyal. No matter what vertical or industry you are in, you can still learn from others in completely different markets, since they too are pursuing the same ultimate goal of ongoing, positive trust.

Steve is a specialist in organizational psychology, focusing on the interaction of people, technology and change. He works as a speaker, author, broadcaster and writer, with clients in IT, cybersecurity, government, healthcare, and law, dealing with cybersecurity, AI, blockchain and the future of work.

Steve is the author of three business books and is a ghostwriter for experts worldwide. He is a visiting lecturer at Ontario Tech University and delivers keynotes, media interviews, white papers, and podcasts on these topics.

He holds degrees in journalism and psychology, and is pursuing a PhD in Psychology, focusing on brain/technology interaction.
