UX ethics: keeping experience design human

By Jamie Stantonian

In our rush to capture users’ attention and custom, Jamie Stantonian asks if we’re neglecting our ethical responsibility to put their interests and voices first.

In the past few decades the user experience profession has taken on an increasingly important role in business, evolving from a ‘nice to have’ to being recognised as an essential competitive advantage. 

Today, in training our clients to improve their UX maturity, we’re helping them metamorphose into their final form: the User-driven Corporation. That’s Jakob Nielsen’s term to describe a business that is absolutely centred on user research to drive its overall direction.

This means looking to understand real problems people face and using these insights to create products and services that address them – perhaps improving people’s lives in the process.

But if we step back and reflect on what this looks like to those less invested in the outcome, we might see all this excitement quite differently. Because, to an outsider, the idea of vast corporate entities using applied behavioural science to sculpt our habits and surreptitiously guide our decision-making sounds like something from a William Gibson novel. 

As smartphones, smart homes, and other interactive technologies become ever more central to the human experience, we bear an enormous responsibility to get this right. But there’s a danger that when our enthusiasm around technical possibilities combines with our unending quest to prove the ROI of experience design, we end up exploiting rather than advocating for users.

Already we’re starting to earn a somewhat sordid reputation as mesmerists whose job is to addict people to apps and swindle them into subscription services. While a magician guides people’s attention and influences their behaviour for entertainment, we do so to meet our clients’ KPIs, or for the sake of adding another case study to our portfolios.

We talk about ethics a lot, but questionable design decisions remain stubbornly widespread. Recent research from Princeton University into the prevalence of dark patterns found them present on 11.2% of the 11,000 English-language ecommerce sites studied.

One could cheerfully note that almost 90% of the sites did not employ manipulative designs, but 11.2% of 11,000 is still more than 1,200 sites that did (in this sample alone), and the most popular ones at that. What’s more, the Princeton figure represents a lower-bound estimate due to methodological limitations.

While many of the present generation of psychological tricks employed in dark patterns (such as the scarcity bias or the sunk cost fallacy) remain relatively unknown outside the industry, that is starting to change. Just as the basic formulas of clickbait became stale once people figured out the ruse, so too are users starting to see through our illusions.

Users might not know the fancy terminology we employ, but those of us who’ve facilitated enough user research have probably watched them sigh and shake their heads at the sight of a countdown timer pressuring them into an impulse purchase, or curse under their breath at a confirm-shaming cancel button that reads ‘No thanks – I’d rather pay full price’.

These are the sounds of yesterday’s neuro-hacks becoming tomorrow’s UI clichés. And while many of these tricks may still be effective in brute economic terms, in the long term there may be fallout for brands associated with what could come to be seen as attempts to circumvent people’s conscious decision-making.
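
To see quite how mechanical these tricks are, here’s a minimal sketch – in TypeScript, for a browser – of the kind of fake-scarcity countdown described above. The element ID and the ten-minute ‘deadline’ are invented for illustration; the point is diagnosis, not a recipe.

```ts
// Illustrative sketch of a fake-scarcity countdown (all names hypothetical).
// The tell-tale trick: the "deadline" is not real. When it runs out,
// the clock quietly restarts, so every visitor feels the same
// manufactured urgency.

const TIMER_DURATION_MS = 10 * 60 * 1000; // an arbitrary ten minutes

let deadline = Date.now() + TIMER_DURATION_MS;

function renderCountdown(): void {
  const remaining = deadline - Date.now();

  if (remaining <= 0) {
    // Nothing actually expires: reset and keep the pressure on.
    deadline = Date.now() + TIMER_DURATION_MS;
    return;
  }

  const minutes = Math.floor(remaining / 60_000);
  const seconds = Math.floor((remaining % 60_000) / 1_000);

  // "offer-timer" is a hypothetical element ID.
  const el = document.getElementById('offer-timer');
  if (el) {
    el.textContent = `Offer ends in ${minutes}:${String(seconds).padStart(2, '0')}`;
  }
}

setInterval(renderCountdown, 1_000);
```

Once a participant in a research session has noticed the clock resetting, no amount of urgency copy will win back their trust.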

American legal scholar and behavioural economist Cass Sunstein sees the question of what is acceptable in terms of user influence as a tension between autonomy and welfare. When a user grumbles about feeling unduly influenced by a manipulative design it’s because their autonomy is being limited. 

Welfare, on the other hand, is when we use psychological methods to design for outcomes that we believe to be in the user’s own best interest. This is the nub of where things start to go wrong. Because, in determining what’s best for the user, we can easily fall prey to our own self-motivated reasoning.

Often we find that a business wants to change the choice architecture of its sites to reduce inbound phone calls, because call centres are expensive to maintain. This might mean burying the phone number somewhere hard to find, or using other visual tricks to make alternatives like chatbots and FAQs more prominent.

This downgrading is almost never in the user’s interest, although we can convince ourselves otherwise (‘they’ll get a quicker response by self-serving!’). And because speech evolved about 70,000 years before the FAQ, there will always be people who just prefer having a chat, and who are infuriated by such transparent obstacles and trickery.

We may hit the KPIs the organisation has set, but unless information about these frustrations is channelled upwards, decision-makers may never learn of the upset and annoyance caused.

Perhaps the most powerful weapon in our arsenal is the most controversial. In 2017, former Facebook president Sean Parker admitted that a key design concern was ‘how do we consume as much of your time and conscious attention as possible?’, and that the site did so by ‘exploiting a vulnerability in human psychology’: giving users ‘a little dopamine hit’.

Parker was referring to psychological methods of social validation and variable rewards designed to produce compulsive behaviour – or, in the language of the slot-machine industry, to extend ‘time-on-device’.

These methods were later refined and popularised in Nir Eyal’s book Hooked: How to Build Habit-Forming Products, in which he argues that only by cultivating such compulsive feedback loops can companies attain ‘Customer Lifetime Value’ (CLTV).

He argues that, because of the intensity of competition in today’s attention economy, ‘hooking’ customers as soon as possible and rewarding them with dopamine-producing content is the only way to achieve any form of success.
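
For readers curious about the machinery, the ‘variable reward’ at the heart of this model is essentially a variable-ratio schedule – the same intermittent-reinforcement principle slot machines rely on. Here’s a minimal TypeScript sketch; the reward types and probabilities are invented for illustration.

```ts
// Minimal sketch of a variable-ratio reward schedule, the mechanism
// behind pull-to-refresh feeds and slot machines alike.
// All reward names and weights below are invented for illustration.

type Reward = { kind: string; weight: number };

const rewards: Reward[] = [
  { kind: 'nothing',         weight: 0.70 }, // most pulls pay out nothing...
  { kind: 'a like',          weight: 0.20 },
  { kind: 'a comment',       weight: 0.08 },
  { kind: 'a viral mention', weight: 0.02 }, // ...which makes the rare win feel electric
];

// Unpredictability is the point: a fixed schedule is quickly discounted,
// but an intermittent one keeps users pulling the lever.
function pullToRefresh(): string {
  let roll = Math.random();
  for (const r of rewards) {
    if (roll < r.weight) return r.kind;
    roll -= r.weight;
  }
  return 'nothing';
}

for (let i = 0; i < 10; i++) {
  console.log(`Refresh ${i + 1}: ${pullToRefresh()}`);
}
```

Run it a few times and the output is never the same – and that unpredictability, not the content itself, is what keeps thumbs swiping.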

Eyal’s handbook comes with the caveat that these techniques should be used to instil only good habits, to ‘enhance lives’ and ‘increase productivity’ and so on. But out in the wild there is no guarantee this will be the case, and as the methods settle into the comfortable slippers of best practice, they seem to be employed at any opportunity, ethical or otherwise.

These are product-level decisions, often far removed from those at the Kanban coal-face who are employed to build the product, and who could simply refuse to do so if they feel some moral red line has been crossed. But really they shouldn’t be put in such an uncomfortable position to begin with.

If building a User-driven Corporation means anything, it means telling HiPPOs – the holders of the ‘highest-paid person’s opinion’ – what they don’t want to hear. If our vision of corporations as forces for good is to manifest itself, we must have existential conversations at board level about the type of companies they want to create, and the type of future we want to inhabit.

Herbert Simon did not coin the phrase ‘attention economy’ to describe the social media of the future, but to articulate a problem organisations face in an information-rich environment: knowing on what basis to make decisions – decisions that ultimately impact human lives.

In today’s dashboard-decorated boardrooms, where decisions are made on the basis of numerical abstractions, our role must be to amplify the voices of the people lost in that abyss of information – to focus attention on the human lives affected by these decisions.

And in so doing, to help build a world where people are heard, not just herded.

A version of this article first appeared in issue 323 of net Magazine.