In addition to finding a sustainable business model, today’s publishers operating in the digital arena are struggling with another challenge: properly measuring their content performance and truly understanding the behavior of their audience. This is a real issue for all sorts of publications, regardless of whether they’re advertising-funded or part of the new breed operating under a reader revenue model.
Things were a bit simpler in the past. Publishers could measure their business success by tracking and comparing the number of newspapers or magazines sold over time. They could then estimate the size of their readership by multiplying the number of copies sold by 2 or 2.5, which is considered to be the average pass-along rate. Information on circulation was particularly important for advertisers who wanted some proof of value before investing in ad space.
Legacy media publishers still rely on this type of calculation because, let’s face it, it’s as good as it gets.
Once they entered the digital era, publishers discovered new ways to monetize their content. However, they also found themselves in unfamiliar territory. Measuring content performance now meant using analytics tools and becoming data literate, which for many publishers turned out to be a lot to swallow. Hence the fallacy of trusting single metrics.
Let’s look at why single metrics such as
- Pageviews
- Time on Page and
- Returning Visitors
cannot reliably serve publishers who want to measure their content performance, understand their audience behavior, and pinpoint their loyal readers so they can nurture a strong relationship with them.
1. Pageviews
Pageviews have long been the go-to metric for measuring ad performance and the popularity of product pages on ecommerce websites. The metric took off with Google Analytics, one of the best-known analytics tools out there, which was designed primarily for ecommerce businesses.
The problem with Pageviews:
Unfortunately, in the absence of anything better, Pageviews were soon adopted as a legit metric for measuring content performance by many analytics tools on the market.
Here’s how pageviews have been falsely interpreted by many publishers: more pageviews equals more visitors and more engagement. If some piece of content generates many pageviews, it’s ultimately better than the rest of the articles, right?
Not really.
Let’s approach this problem systematically.
Here’s how pageviews have been defined within Google Analytics:
A pageview (or pageview hit, page tracking hit) is an instance of a page being loaded (or reloaded) in a browser. Pageviews is a metric defined as the total number of pages viewed. […] If a user clicks reload after reaching the page, this is counted as an additional pageview. If a user navigates to a different page and then returns to the original page, a second pageview is recorded as well.
There is also a metric called Unique Pageviews, which represents the number of sessions during which a certain page was viewed at least once. So, if a user visits the page in question, then moves away from it and comes back to it within the same session, GA will count 1 unique pageview.
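To make the difference concrete, here is a minimal sketch (in TypeScript, with a made-up session log) of how the two counts diverge for the same page:

```typescript
// Hypothetical page-load log for a single session: every load or reload is one hit.
const hitsInSession: string[] = [
  "/article-a", // initial load
  "/article-b", // navigates away
  "/article-a", // comes back to the original page
  "/article-a", // hits reload
];

// Pageviews: every load counts, so /article-a gets 3 in this session.
const pageviews = hitsInSession.filter((page) => page === "/article-a").length;

// Unique Pageviews: at most one per page per session, so /article-a gets 1.
const uniquePageviews = hitsInSession.includes("/article-a") ? 1 : 0;

console.log({ pageviews, uniquePageviews }); // { pageviews: 3, uniquePageviews: 1 }
```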
However, Pageviews is a browser-level metric: it doesn’t describe the nature of the connection or the level of engagement site visitors had with your content. Not even close.
A person might open a certain article and then close it immediately, or leave it open in a browser tab while doing something else. The script of the analytics tool will record it as a pageview regardless.
We could say that the more precise name for Pageviews would be Page-Loads, since this metric does not necessarily show the number of people who viewed the page, but the number of times the page was loaded in the browser.
How publishers try to make sense of Pageviews:
Publishers and content marketers may try to make more sense of this metric by watching how it correlates with other single metrics available in GA and similar analytics tools.
For instance, they will look at a combination of the single metrics available: Pageviews, Average Time on Page, and Bounce Rate. So, the common “formula” for estimating whether or not a certain article performed well goes something like this:
High number of pageviews + “good” Average Time on Page + low Bounce Rate
The “ideal” Time on Page would be one that corresponds to the time needed to read the article in question. The average reading speed is roughly 265 words per minute, so publishers sit down and do some simple math: if their article has 1,500 words, it would take a person around five and a half minutes to read it, top to bottom. Of course, not all site visitors will read it all the way through, so Average Time on Page will be lower. The tricky part for publishers is deciding what time is acceptable here, i.e. what counts as a “good” Average Time on Page.
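As a quick back-of-the-envelope sketch of that math (using the 265 WPM figure mentioned above):

```typescript
// Estimate the "ideal" reading time from word count and an assumed reading speed.
const wordsPerMinute = 265;    // average reading speed cited above
const articleWordCount = 1500; // example article length

const minutesToRead = articleWordCount / wordsPerMinute;
console.log(minutesToRead.toFixed(1)); // "5.7", roughly five and a half minutes
```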
The key problem with this? Well, the way Average Time on Page is calculated within GA and similar tools can mess up your assumptions (see the following segment called Time on Page / Average Time on Page).
By definition, a bounce is a single-page session on your site. The Bounce Rate is the percentage of single-page sessions, and the Bounce Rate for a page is based only on sessions that start with that page.
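In code terms, the calculation looks something like this minimal sketch (the session data is made up):

```typescript
// A bounce is a session that starts on the page and views nothing else.
interface Session {
  landingPage: string;
  pagesViewed: number;
}

const sessions: Session[] = [
  { landingPage: "/article-a", pagesViewed: 1 }, // bounce
  { landingPage: "/article-a", pagesViewed: 3 },
  { landingPage: "/article-a", pagesViewed: 1 }, // bounce
  { landingPage: "/home",      pagesViewed: 2 }, // doesn't start on /article-a, ignored
];

const startedHere = sessions.filter((s) => s.landingPage === "/article-a");
const bounces = startedHere.filter((s) => s.pagesViewed === 1);

const bounceRate = (bounces.length / startedHere.length) * 100; // ~66.7% for /article-a
```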
So, publishers think: the lower the Bounce Rate, the better. In theory, they are right, since a low Bounce Rate indicates that people were interested in other content published on your website, i.e. they decided to browse further. But information on the way they actually engaged with your content is not available in GA’s standard reports. You can presume some of them stuck around on your website, but that’s all.
Online, you can find recommendations for ideal Bounce Rate values: no higher than 40%, with average values going up to around 55%. However, you should set a baseline according to your own website and not chase figures and norms that work for someone else. Plus, Bounce Rate values can be horribly misleading if they are not interpreted properly. Context matters too: if a contact page has a high Bounce Rate, for instance, it’s not that the page doesn’t provide value. It simply answers a specific query for users who then don’t feel the need to browse any further.
How we approached this problem:
As opposed to Pageviews in GA and similar tools, at Content Insights we have developed complex metrics. Our analytics solution has Article Reads, which focuses on real human behavior: it takes into account the actual time spent on the page as well as the way people interact with it (e.g. clicks, text selection, scrolls, etc.). In addition to Article Reads, CI also has Read Depth, a complex metric that unveils how deeply a visitor got into reading a piece of content. For greater precision, it relies on a combination of several metrics, one of them being Attention Time. We also have Page Depth, which calculates the average number of pages visited after a reader opens the initial page or article.
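To illustrate the general principle of interaction-aware measurement (this is only a rough sketch; the event list and thresholds below are invented and are not Content Insights’ actual algorithm):

```typescript
// Rough sketch: treat a pageview as an "article read" only when interaction
// signals suggest a human actually engaged with the content.
let interactions = 0;
let maxReadDepth = 0; // 0..1, how far down the article the reader got

["click", "keydown", "selectionchange"].forEach((eventName) =>
  document.addEventListener(eventName, () => {
    interactions += 1;
  })
);

document.addEventListener(
  "scroll",
  () => {
    const seen = window.scrollY + window.innerHeight;
    const total = document.documentElement.scrollHeight;
    maxReadDepth = Math.max(maxReadDepth, seen / total);
  },
  { passive: true }
);

// Invented thresholds, purely illustrative: some interaction AND meaningful depth.
const looksLikeARead = () => interactions >= 3 && maxReadDepth >= 0.5;
```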
2. Time on Page / Average Time on Page
Many publishers look at Time on Page and Average Time on Page when trying to define which content could be considered engaging. They think that the longer people stay on a certain page, the higher the probability that the offered content is engaging.
However, once you realize how this metric is measured, you’ll see it doesn’t provide any reliable insights.
The problem with measuring Time on Page:
Google Analytics and similar analytics tools measure these metrics only at the browser level, which says nothing about the way people engage with content.
When a person navigates away from the page but leaves the tab open, Google Analytics and similar tools cannot register that; as far as the analytics is concerned, the person never left the website. GA also can’t measure the time a user spent on the last page of their visit to your site. And if the visitor leaves after viewing just one page (i.e. if the visit is a bounce), no time is recorded at all.
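Here’s a simplified model (not GA’s exact implementation) of why hit-timestamp-based timing behaves this way: time on a page is the gap between that pageview and the next hit, so a page with no following hit gets no time at all.

```typescript
// Simplified model: time on a page = timestamp of the next hit minus this hit's timestamp.
interface Hit {
  page: string;
  timestamp: number; // milliseconds
}

const hits: Hit[] = [
  { page: "/article-a", timestamp: 0 },
  { page: "/article-b", timestamp: 120_000 }, // /article-a gets 2 minutes
  // The visitor reads /article-b for ten minutes and then closes the tab:
  // there is no further hit, so /article-b gets no recorded time at all.
];

const timeOnPage = hits.map((hit, i) => ({
  page: hit.page,
  seconds: hits[i + 1] ? (hits[i + 1].timestamp - hit.timestamp) / 1000 : null,
}));
// [ { page: "/article-a", seconds: 120 }, { page: "/article-b", seconds: null } ]
```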
As you can see, this data does not properly reflect the level of readers’ engagement with your content.
How publishers try to make sense of the Average Time on Page:
Some publishers deploy event trackers, such as scroll depth, in an attempt to get more accurate reports and ensure that time on page is measured even if the page is a bounce. However, it’s not that simple.
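A typical scroll-depth tracker looks roughly like this minimal sketch (the thresholds are conventional but arbitrary, and sendEvent is a placeholder for whatever analytics call you use):

```typescript
// Minimal scroll-depth tracker: fire an event when the visitor first passes
// 25%, 50%, 75%, and 100% of the page.
declare function sendEvent(name: string, value: number): void; // placeholder for your analytics call

const thresholds = [0.25, 0.5, 0.75, 1];
const alreadyFired = new Set<number>();

document.addEventListener(
  "scroll",
  () => {
    const depth =
      (window.scrollY + window.innerHeight) / document.documentElement.scrollHeight;
    for (const t of thresholds) {
      if (depth >= t && !alreadyFired.has(t)) {
        alreadyFired.add(t);
        sendEvent("scroll_depth", t * 100);
      }
    }
  },
  { passive: true }
);
```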
When it comes to relying solely on scroll depth, there is an underlying issue regarding:
- the user’s real activity
- the location of the fold
- the length of the article
Let’s say a person scrolls through 60% of your content, but they are doing so with the page zoomed out to 75% instead of 100%. They can already see the rest of your content, so they never scroll further down.
Or, let’s say they reach 60% of your content but remain there for half an hour (the page stays open while they move away from their computer) before finally bouncing off. In addition, just because someone scrolls through your content doesn’t mean they actually read it. And what if the article is short? Scroll depth will quickly hit 100%, but that doesn’t mean this particular article generated more engagement or performed better than others.
Needless to say, even with event tracking, the reports might not be accurate as they do not provide a full picture. Data discrepancies are not rare, so account owners might notice in their report that the average time on page is longer than the average session duration, which doesn’t make much sense. In Google Analytics, this is called “lost time”.
How we approached this problem:
Unlike GA and similar analytics tools, Content Insights measures Attention Time, which is the actual time a user spends on the page consuming content. It does not take into account idle time, i.e. the time a person is not active on the page or is away from it. So, what you get with this metric is the actual engaged time.
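The general idea behind engaged-time measurement can be sketched like this (a simplified illustration, not CI’s exact implementation; the 15-second idle cutoff is an arbitrary example):

```typescript
// Simplified engaged-time sketch: the clock only runs while the tab is visible
// and the visitor has been active within the last 15 seconds.
const IDLE_CUTOFF_MS = 15_000; // arbitrary idle threshold for illustration
let attentionMs = 0;
let lastActive = Date.now();
let lastTick = Date.now();

["scroll", "mousemove", "keydown", "touchstart"].forEach((eventName) =>
  document.addEventListener(eventName, () => { lastActive = Date.now(); }, { passive: true })
);

setInterval(() => {
  const now = Date.now();
  const tabVisible = document.visibilityState === "visible";
  const recentlyActive = now - lastActive < IDLE_CUTOFF_MS;
  if (tabVisible && recentlyActive) attentionMs += now - lastTick; // count engaged time only
  lastTick = now;
}, 1000);
```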
Our analytics solution relies on a complex algorithm called Content Performance Indicator (CPI). CPI is always presented in the form of a number, from 1 to 1000, with 500 being the baseline (a.k.a. the “norm”) for the observed website, section, topic, author or article.
CPI takes into consideration dozens of different content performance metrics and examines the relationships between them. It also weighs them differently according to three behavioral models: exposure, engagement, and loyalty. So, we have developed three CPIs that measure these behaviors: Exposure CPI, Engagement CPI, and Loyalty CPI.
In the context of engagement, the Engagement CPI is calculated by measuring attentive reading and the reader journey within the site or domain. It offers a far more advanced and precise way of measuring engagement than simply examining Time on Page, which is a single metric within GA and similar analytics tools.
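We won’t reproduce the actual algorithm here, but the principle of a weighted index normalized around a site-specific baseline can be sketched like this (the metric names, weights, and scaling below are purely illustrative and not the real CPI formula):

```typescript
// Purely illustrative index: compare an article's metrics to the site's own baseline,
// weight them, and scale the result so that "average" lands at 500 on a 1 to 1000 scale.
interface MetricSnapshot {
  attentionTime: number;
  readDepth: number;
  pageDepth: number;
}

const weights = { attentionTime: 0.5, readDepth: 0.3, pageDepth: 0.2 }; // invented weights

function illustrativeIndex(article: MetricSnapshot, baseline: MetricSnapshot): number {
  const ratio =
    weights.attentionTime * (article.attentionTime / baseline.attentionTime) +
    weights.readDepth * (article.readDepth / baseline.readDepth) +
    weights.pageDepth * (article.pageDepth / baseline.pageDepth);
  // A ratio of 1 means "performs exactly like the baseline" and maps to 500.
  return Math.max(1, Math.min(1000, Math.round(ratio * 500)));
}
```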
3. Returning Visitors
In order to understand what Returning Visitors are, we have to briefly examine the way Google Analytics and most of today’s analytics tools track users.
The first time a certain device (desktop, tablet, mobile) or browser (Chrome, Firefox, Internet Explorer) loads your website content, the Google Analytics tracking code assigns it a random, unique ID called the client ID and sends it to the GA server.
Each unique ID is counted as a new unique user in GA: every time a new ID is detected, GA counts a new user. If the user deletes their browser cookies, the ID gets deleted and reset.
With this in mind, a Returning Visitor is one who uses the same device or browser as before to access the website and start a new session, without having cleared their cookies. So, if Google Analytics detects an existing client ID in a new session, it sees it as a returning visitor.
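Conceptually, the mechanism looks like this simplified sketch (the cookie name and lifetime are made up, and real tracking code does considerably more):

```typescript
// Simplified sketch of cookie-based visitor identification: no cookie means a new
// visitor gets a freshly minted client ID; an existing cookie means a returning visitor.
function getOrCreateClientId(): { clientId: string; isReturning: boolean } {
  const match = document.cookie.match(/(?:^|; )_cid=([^;]+)/);
  if (match) return { clientId: match[1], isReturning: true };

  const clientId = crypto.randomUUID(); // random, unique ID
  document.cookie = `_cid=${clientId}; max-age=${60 * 60 * 24 * 730}; path=/`;
  return { clientId, isReturning: false };
}
// Clearing cookies or switching browsers/devices discards the ID,
// so the same person comes back looking like a brand-new visitor.
```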
The problem with Returning Visitors:
The problem with calculating Returning Visitors is obvious: analytics tools might count the same visitor who returned to the website as new, simply because they changed their device or browser, or cleared their cookies. There’s not much anyone can do about this, since the client ID changes in each of these cases; it’s not possible to track users across different browsers and devices this way. Also, Google Analytics might count the same visitor as both new and returning if they come back within a certain time period. This means there can be an overlap between new and returning visitors, which causes data discrepancies. In addition, the same user might be counted twice for the same source/medium.
However, there is a much bigger issue here:
Many publishers have accepted Returning Visitors as a metric that indicates the number of loyal readers, which is a logical fallacy.
Returning Visitors indicate the number of people who visited your website in the past and then came back. However, this report says nothing about:
- How good your content is at engaging visitors
- The actual human behavior (how people interact with your content)
- The frequency and recency of their visits
- Whether those visitors are actually loyal to your publication or just occasional snoopers who have been on your website before (i.e. have these visitors formed an actual habit of visiting your publication, or did they just happen to stumble upon your website more than once over a certain time period for any number of reasons)
To better understand this metric, we can try to explain it with a simple analogy. If a person goes to a store, leaves and comes back again, without any specific intent or actually making a purchase – is this person a loyal customer by default? Not really. They could be, but you cannot really know.
Once again, we have to underline that Returning Visitors measures browser activity and has nothing to do with loyalty.
How publishers try to make sense of the Returning Visitors:
Many publishers choose to ignore these calculation fallacies, or they are not even aware of how things are truly measured. They look at the New vs. Returning Visitors ratio to get a top-level overview of the type of traffic their website is attracting, even if it’s not very accurate. They then compare things like the number of sessions or average time on page, in an attempt to uncover the similarities and differences between how returning and new visitors engage with their website. In addition, they might apply segmentation and generate custom reports for more detail on their visitors.
Still, these reports are based on single metrics that don’t provide actionable insights when it comes to measuring content performance.
Another thing publishers can do to get more accurate data is track a user ID, i.e. establish a login system on their website. When logged in, users can be tracked across devices. However, GA doesn’t work retroactively, so if you do choose to implement a login system, it will not connect any previous sessions. The burning issue here is that your visitors are unlikely to log in to your website if the content is available regardless.
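For those who do get readers to log in, the idea is to pass your own stable identifier to the analytics tool once the reader signs in. With gtag.js it looks roughly like the sketch below (the measurement ID and user ID are placeholders; check Google’s current documentation for the exact setup):

```typescript
// Rough sketch: once a reader logs in, attach your own stable user ID so later
// sessions from the same person can be stitched together across devices.
declare function gtag(...args: unknown[]): void; // provided by the gtag.js snippet

function onReaderLogin(internalUserId: string): void {
  gtag("config", "G-XXXXXXXXXX", { user_id: internalUserId }); // placeholder measurement ID
}
```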
How we approached this problem:
Content Insights’ Labs Team has been particularly interested in understanding and defining loyal readers, and finding a way to measure loyalty.
In the end, we defined loyal readers as “routinely highly engaged”, since this most accurately corresponds to their habitual behavior. There is a specific way their “Active Days” are counted within CI’s analytics to ensure they are truly interacting with the content.
Unlike other analytics tools, we measure loyalty on the content level because that’s what truly matters. Publishers want to identify those content pieces that encourage loyal behavior and perhaps contribute to converting loyal readers into subscribers.
With the latest improvements of our Loyalty CPI, it’s now possible to measure exactly that. This behavioral model looks at how articles are contributing to the overall loyalty of your reader base on the website.
“If it ain’t broke, don’t fix it”
We have created an overview of the most frequently used single metrics and shown in great detail why basing content performance reports on them is wrong.
The burning issue here is that many of today’s publishers won’t bother to understand the way things are calculated.
For example, publishers will genuinely believe that when they open the Audience report in GA, they’ll get accurate and reliable insights into how their audience consumes their content. But every report in GA, as an out-of-the-box tool, relies on single metrics that describe browser events.
These reports cannot properly measure human behavior and its complexity, no matter what you call them. Many analytics tools on the market have built entire narratives that are in fact false and misleading, since you cannot really measure the things they promise you can.
You can call a cat a tiger and pretend it’s ok just because they belong to the same family tree of felines, but at some point – the mistake will rise to the surface and become painfully obvious to all key stakeholders. A meow is not a roar.
Some publishers are beginning to realize the fallacy of believing single metrics when measuring content performance, but they choose to turn a blind eye. Others are not yet aware of the fact that the problem even exists.
Given that people are naturally resistant to change, many publishers stick with the “if it ain’t broke, don’t fix it” principle. Their logic is sound: they’ve been using single metrics and managed to make ends meet. Change brings the danger of losing control, it has ‘uncertainty’ written all over it, it imposes additional work, and it is generally scary, even terrifying.
However, things ARE broken and they DO need fixing.
Just like all fundamental changes, this shift from single metrics to complex metrics follows the so-called Hemingway Law of Motion: it’s happening gradually and then suddenly. And just like with any type of disruptive technology or method that pushes the world forward, early adopters gain competitive advantage. We’ve seen it happen. That’s how progress works.
Now the spotlight is on you. Which analytics do you use? How do you make sense of data? What’s your “north star” metric for measuring content performance? We invite you to join this conversation and share your thoughts in the comments below.