
Making Hard Choices for Native's Survival

Some native advertising on publisher sites works extremely well. Most does not.

Around two-thirds of people who reach typical content will be engaged for more than 15 seconds, but on native content that drops to one-third. Only 24 percent of visitors scroll down a native advertising page at all, compared with 71 percent for normal content, and fewer than one-third of those who do engage make it past the first third of the article. If a goal of native is to replicate seamlessly the user experience of the surrounding content, then most of it is failing.

Significant differences in quality are to be expected. The problem is that the metrics we’re using often can’t separate the good from the bad, and we risk ending up in the same place as display: indifferent CPMs and frustrated advertisers.

Early digital brand advertising was characterized by high CPMs and low understanding. Extraordinary prices were charged largely based on hype and jazz hands, and metrics were chosen with little thought to their long-term impact. Soon, publishers found they could make a case for content and context but not quantify it when it mattered. The metrics they chose couldn’t differentiate dirt from diamonds—so neither could their prices.

With advertisers now looking to create content, media is again going through a period of excitement where publishers can charge high prices and advertisers will pay them. It’s tempting to look again to metrics that explain little but shine a positive light. However, metrics that do not correlate with quality or an advertiser’s core goals will make it harder for publishers to charge premiums once the novelty of native has worn off.

With native we want to understand whether the content communicated our message to our target audience, yet some of the most common ways of ascribing value do not measure the content at all. Native distribution networks often charge based on the number of impressions the link to the content receives. A blank page could be waiting for the visitor on the other side of that link, and the feedback and the cost to the advertiser would be exactly the same.

At least with pageviews one knows that the link was clicked, but they say nothing about what the user did beyond that. Moreover, it is often easier for publishers to simply buy pageviews than to build an audience. While some publishers work with marketers to create great content and attract their audience to it, others can effectively buy an A grade with far less effort.

When it’s easy to buy traffic for $5 and sell it for $30, it is hard for advertisers to differentiate good from bad. Some paid promotion can be helpful, but an advertiser buying one audience and being given another is not sustainable. Buying traffic also means publishers have less incentive to understand how native can appeal to their core audience, and marketers, facing a constant litany of campaign “success” stories, lose a vital feedback loop that would help them craft better content.

Social shares can seem like a silver bullet, but they are easily gamed, and there is no correlation between the number of times a piece of content is shared and the average amount of attention it accrues. Social shares are a measure of social sharing, not of engagement with an article. Classical time spent at least attempts to measure the impact of the content rather than the link, but it does so inaccurately.

At a recent conference, one publisher remarked that visitors were spending seven minutes on some of its paid content. At an average reading speed of 250 words a minute, seven minutes implies roughly 1,750 words of reading; for the content in question, the figure suggests an audience more than three times slower than the average reader. The likelier culprit is the metric itself: time spent simply counts the interval between page-load timestamps. It ignores visitors who bounce, and it has not kept up with a world of multiple browser tabs and coffee breaks. It is well-intentioned, but so inaccurate that advertisers cannot trust it and publishers should not sell on it.
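To make the flaw concrete, here is a minimal sketch (in TypeScript; the field names are hypothetical) of how that classical measure is typically derived from consecutive page-load timestamps:

```typescript
// Classical "time spent": the gap between consecutive page-load
// timestamps within a session.
interface Pageview {
  url: string;
  loadedAt: number; // epoch milliseconds
}

function timeSpentPerPage(session: Pageview[]): Map<string, number | null> {
  const result = new Map<string, number | null>();
  for (let i = 0; i < session.length; i++) {
    const next = session[i + 1];
    // The last page of a session, including every bounce, has no
    // following timestamp, so its time spent is simply unknown. For
    // all other pages the clock keeps running through background tabs
    // and coffee breaks alike.
    result.set(session[i].url, next ? next.loadedAt - session[i].loadedAt : null);
  }
  return result;
}
```

A seven-minute figure is exactly the kind of number this arithmetic produces when a reader opens an article, switches tabs, and comes back later.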

If not these metrics, then what?

With native, advertisers seek to communicate a specific message to a target audience in a format that replicates the normal user experience of the host site. Understanding success requires asking: How much of the audience that came is the audience the advertiser wanted? How much did they actually engage with the content? Were these just hollow clicks?

The rubric is “which audience was promised in the pitch?” If the story hinged on the business-savvy audience the site normally attracts, then advertisers should ask: How much of the audience had visited the site in the last month? And how much of it was attracted directly from the publisher’s site?
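A minimal client-side sketch of those two checks, assuming a first-party timestamp in localStorage and a hypothetical publisher domain:

```typescript
// Sketch: was this visitor on the site within the last 30 days, and
// did they arrive from the publisher's own pages? The storage key and
// domain are assumptions for illustration.
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;
const PUBLISHER_HOST = "publisher.example.com"; // hypothetical host site

const lastVisit = Number(localStorage.getItem("lastVisit") ?? 0);
const returningVisitor = lastVisit > 0 && Date.now() - lastVisit < THIRTY_DAYS_MS;
localStorage.setItem("lastVisit", String(Date.now()));

let fromPublisher = false;
try {
  fromPublisher = new URL(document.referrer).hostname === PUBLISHER_HOST;
} catch {
  // Empty or malformed referrer: direct traffic, or stripped by the browser.
}
```

Aggregated across visitors, these two flags answer the rubric directly: what share of the campaign’s audience is the audience that was sold.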

Publishers able to attract their core audience deserve to charge a premium for that hard work. If the visitors are disproportionately new to the site, advertisers should understand that while they are buying the publisher’s brand, they may not be reaching the publisher’s audience.

So how much did the audience actually engage with the content? An advertiser’s goals are not met simply because the right audience landed on a page; the difference in value between a visitor who bounces immediately and one who engages with the content is significant. Focusing on content performance also balances the responsibility for success between advertiser and publisher: if the publisher drives a quality audience to the page but the advertiser’s content is poor, the campaign will still founder. Both need to be at their best to succeed.

Scroll depth can give useful outer bounds, showing how many visitors bounced without scrolling at all and how many made it through the entire piece.
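A sketch of how those outer bounds might be captured; the /analytics endpoint is hypothetical, and treating 95 percent depth as “completed” is an assumption:

```typescript
// Sketch: record the deepest point the visitor reached, then report
// the two outer bounds: never scrolled, or finished the piece.
let maxDepth = 0; // fraction of the page seen, 0..1

window.addEventListener("scroll", () => {
  const seen = window.scrollY + window.innerHeight;
  maxDepth = Math.max(maxDepth, seen / document.documentElement.scrollHeight);
});

window.addEventListener("beforeunload", () => {
  navigator.sendBeacon("/analytics", JSON.stringify({
    bouncedWithoutScrolling: maxDepth === 0, // no scroll event ever fired
    completedContent: maxDepth >= 0.95,      // assumption: 95% depth = the end
  }));
});
```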

However, the most valuable and scalable way to measure is by capturing second-by-second attention data that can infer when a visitor is distracted or engaged, giving a far more accurate picture of how well the content is connecting with its audience. Upworthy calls this metric Attention Minutes; Contently and Chartbeat call it Engaged Time.
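In that spirit, a minimal sketch of second-by-second attention measurement; the five-second idle threshold is an illustrative assumption, not either vendor’s actual rule:

```typescript
// Sketch: a second of engaged time accrues only while the tab is
// visible and the visitor has given some input signal recently.
const IDLE_AFTER_MS = 5000; // assumption: 5s without input = distracted
let lastInteraction = Date.now();
let engagedSeconds = 0; // would be reported to analytics periodically

for (const evt of ["mousemove", "scroll", "keydown", "touchstart"]) {
  window.addEventListener(evt, () => { lastInteraction = Date.now(); });
}

setInterval(() => {
  const recentlyActive = Date.now() - lastInteraction < IDLE_AFTER_MS;
  if (document.visibilityState === "visible" && recentlyActive) {
    engagedSeconds += 1;
  }
}, 1000);
```

The essential design choice is that attention must be re-earned every second: a hidden tab or a silent keyboard stops the clock, which is precisely what classical time spent fails to do.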

Advertisers are likely to compare metrics such as uniques and engaged time across their campaigns, but user behavior differs greatly from site to site, so those comparisons may be less useful than expected. Instead, we should ask how closely the paid content recreated the user experience of normal content on the host site.
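One simple way to express that comparison, sketched with hypothetical aggregates from the publisher’s own analytics:

```typescript
// Sketch: judge paid content against the host site's own baseline
// rather than against campaigns on other sites. Inputs are assumed
// aggregates, e.g. median engaged seconds per visitor.
function parityWithNormalContent(
  nativeEngagedTime: number,
  siteTypicalEngagedTime: number,
): number {
  // 1.0 means the paid piece held attention as well as normal content
  // on the same site; below 1.0, it fell short of the host's standard.
  return nativeEngagedTime / siteTypicalEngagedTime;
}
```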

To be clear, paid content is not normal content. Normal content has one master to serve, while paid has two. It is hard for native to reach the standard that normal content does and advertisers should account for that. Nevertheless, sites including Gizmodo, Refinery29 and KSL.com show that it is possible.

It's up to us to ensure native has a strong future where marketers and publishers work together to craft content that attracts and captivates the right audience. That means starting with goals and metrics that align with what advertisers actually care about. It means recognizing that these are early days and both sides need feedback more than empty A grades.

If we can do that, then native acts not just as a band-aid on the open wound of display, but as a key ingredient of a Web where quality matters.

 

Tony Haile, Adweek. May 2, 2014

Copyright © 2014 Adweek. All rights reserved.

 
