New York, USA, December 17, 2019: YouTube ads service on a laptop screen, close-up view

Panic rippled through the marketing world a few years ago when brands discovered that they were paying to have their ads run alongside extremist propaganda.

Executives from platforms such as YouTube were hauled in front of MPs in 2017 to answer questions about why videos encouraging hate crimes were promoted on their sites, and advertisers scrambled to find out what content their ads were being placed against.

Examples included recruitment videos for banned jihadi and neo-Nazi groups which had remained on sites long after the content had been reported, according to the parliamentary investigation.

Ben McOwen Wilson, managing director of YouTube in the UK, says the company has done a lot of work since then to strike a balance between “providing a range of voices and content historically excluded from traditional media” and the “potential harms that misuse of massive reach can cause”.

However, he concedes: “We recognise that we have not always found ourselves on the right side of this balance.”

Problems do persist. Big advertisers such as Nestlé last year pulled spending from YouTube in the US, amid growing concerns that the platform had not done enough to prevent inappropriate comments being posted on videos of young children. The Google-owned company and Facebook were also criticised in March for failing to remove videos of the Christchurch gun attack in New Zealand quickly enough.

They are not alone. As well as staying on the right side of politicians and the law when it comes to editorial content, a new wave of digital publishers, including Big Tech platforms, face the same perennially tricky task as traditional media owners. How can they best avoid alienating advertisers and their consumers when ads appear against inappropriate, if strictly legal, material?

The proliferation of online advertising opportunities and complicated, circuitous ways of selling inventories of page views has led to numerous blunders. These include the placing of adverts for airlines against news reports of fatal air accidents, wholesome consumer brands being promoted through high-traffic porn aggregator sites, and adverts for McDonald’s takeaways appearing against online stories on the US obesity epidemic.

So-called fake news is also a hazard. In 2019, brands unknowingly spent an estimated $235m to place their ads on global sites known to share disinformation or conspiracy theories, according to the Global Disinformation Index, an accuracy rating organisation.

But many in the industry agree that the danger posed to brands by such mismatches has receded. Integral Ad Science, which aims to help clients avoid risky ad placements on digital platforms, found in January that brand safety had fallen off the list of top concerns for media professionals, with only a third naming it as a challenge. Topping the list instead was the challenge of targeting the right audience.

John Montgomery, global vice-president of brand safety at WPP-owned GroupM, says the reduced concern should be seen in the context of how much resource has been put into curbing the reputational risk of online ad placement over the past few years.

“We understand the risks now and we have mitigated many,” he says.

Potential hazards in the digital supply chain do not, however, stop at landing next to harmful or off-putting imagery. Digital advertising is also regularly subject to fraud, whether through the use of dishonest metrics on impressions and click-through rates or the illegal collection of consumer data.

To mitigate these risks, GroupM encourages clients to follow a few basic guidelines on best practice. One is to review the data collection practices of all advertising partners; another is to buy ad space only from publishers that use proper authentication of viewings and click-throughs.

The downside of a more cautious approach is the cost. Using a handpicked list of pre-checked publishers makes it more difficult, and therefore more expensive, to reach the desired number of people in the target demographic categories.

Excluding certain websites, such as porn aggregators or publishers that have been linked to hate speech, is not hard to do. But filtering articles and images on mainstream sites can be more difficult.

Tim Elkington, chief digital officer at the advertising trade body IAB UK, says blocking articles that contain the word “sex” can, for example, exclude stories about the Duchess of Sussex. Brands that scramble to avoid having their ads displayed next to articles about shootings or other highly publicised disasters end up ruling out large numbers of stories that mention cities with any kind of connection to the events.
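To illustrate the mechanics, here is a minimal Python sketch (the blocklist terms and function names are purely hypothetical, not any vendor's actual implementation) of how crude substring matching trips over “Sussex” while whole-word matching does not:

```python
import re

# Hypothetical keyword blocklist of the kind Mr Elkington describes
BLOCKLIST = ["sex", "shooting"]

def naive_block(article: str) -> bool:
    # Substring matching: "sex" matches inside "Sussex"
    text = article.lower()
    return any(word in text for word in BLOCKLIST)

def word_boundary_block(article: str) -> bool:
    # Whole-word matching avoids the "Sussex" false positive
    text = article.lower()
    return any(re.search(rf"\b{re.escape(word)}\b", text) for word in BLOCKLIST)

headline = "The Duchess of Sussex visits a cosmetics workshop"
print(naive_block(headline))          # True  -- blocked, a false positive
print(word_boundary_block(headline))  # False -- the story is allowed
```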

Technology is, however, improving. “Advances in natural language processing are making it possible to assess the context of an article, recognising for example that an article about cosmetics talking about bath bombs is safe,” says Mr Elkington.
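As a rough illustration of the kind of contextual assessment Mr Elkington describes, the sketch below uses zero-shot classification with an off-the-shelf open-source model (via the Hugging Face transformers library; the model choice, labels and example text are assumptions, not the tools actually used by IAB members) to judge that a bath-bomb article is about beauty rather than violence:

```python
from transformers import pipeline

# Zero-shot classification with a general-purpose pretrained model
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

article = ("These fizzy bath bombs dissolve into fragrant, "
           "skin-softening suds -- our pick of this season's cosmetics.")

labels = ["violence and weapons", "beauty and cosmetics", "disaster news"]
result = classifier(article, candidate_labels=labels)

# The top label should be "beauty and cosmetics", so a keyword hit on
# "bombs" alone would not be enough to block the placement.
print(result["labels"][0], round(result["scores"][0], 2))
```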

Lisa Utzschneider, chief executive of Integral Ad Science, says she works with companies such as Adidas and Chanel to facilitate so-called contextual targeting, meaning the ad’s message is made more relevant by the content it is placed next to.

Ad agency groups have also sought to set the agenda, helping to develop protocols that protect both people and brands online and crack down on the spread of misinformation and abuse on digital platforms.

Last June, the tech companies Facebook, Google and Twitter announced they would work with ad agencies including WPP, Publicis and Omnicom, and global brands such as Procter & Gamble and Unilever, to “address harmful and misleading content”.

“If we remove inappropriate content, that makes the brand safe as well,” says Mr Montgomery. “And the platforms have joined the table recognising that.”
