Write, measure, repeat

Daniel Stevens
8 min read · Feb 1, 2019

This is some of my earlier writing. Originally published in 2014 on LinkedIn.

My first experience developing information was writing policies for hospital accreditation. But I wondered: does anyone use these? So I took a cue from TV detectives. I put a piece of clear tape on the bottom of four manuals and counted the number of times the tape was broken or removed. Page hits! (This was in 1995.) What I was unable to determine was whether people were actually using the documents.

That experience taught me that producers of information have little insight into whether or how their information gets used, or what value it provides. During my time with IBM, the team I worked with had just begun to search for better ways to gather that data. We did this through two primary actions:

Direct feedback: Seeing the reaction on a face, the furrowed brow or the involuntary “Wow!”, tells you much about the kind of experience your audience has with your information. User groups and usability tests capture that “feeling scale” better than any survey or analytic.

Effectively measure support ticket data: Even a simple accounting of how many times a particular place in a process (any process) requires a support call can indicate an information problem. We could then conduct a usability test and look for all the places where the information was unclear, unavailable, or, gasp, just plain wrong. One particular installation process (I worked in hardware) was causing quite a ruckus. So we tracked support tickets, found the pain points, user tested the process, got a diverse team together, and discovered that support had written their own articles for each pain point. The solution was to get everyone on the same page and come up with a single information flow. This cut the install time by 50%, and customer calls decreased by 22%. That’s a win for everyone! The mistake too often made is to measure how “good” or “bad” documentation is rather than the overall information experience for the customer.

These tried-and-true methods get you great data, but they fail to work at scale. With the advent of online information, user data started becoming available at scale. Suddenly you could “see” how many people actually viewed your information.

Data in an online world

Data begins with the most basic metric: page clicks. They tell you, at a minimum, that people saw your page, and you need to know at least that much if you’re going to measure anything else. Page clicks are fairly easy to gather and don’t generally require specialized analytics. However, page clicks alone tell you very little, and there is so much more to know.
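
For readers who have never wired this up, here is a minimal sketch of that baseline measurement using the classic analytics.js library. It assumes the analytics.js script is already loaded on the page, and the property ID is a placeholder:

```ts
// Minimal sketch: recording a basic page view with classic analytics.js.
// Assumes the analytics.js script is already loaded; "UA-XXXXX-Y" is a placeholder ID.
declare function ga(...args: unknown[]): void;

ga('create', 'UA-XXXXX-Y', 'auto'); // create a tracker for your property
ga('send', 'pageview');             // count the view, the baseline "page click"
```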

Now we come to how our team is combining what we learn from analytics, the growth hacking world, and our own innovations to take the next step in making information better. This is the next level of data, and it requires something more than a standard click count. We use the full version of Google Analytics (GA) to measure results on several levels and are working on more ways to understand how our customers use information. There are essentially three levels of measurement I’ll address:

  • Level 1: Basic page and site analytics.
  • Level 2: Goal-oriented analytics.
  • Level 3: Growth hacking style experiments.

Level 1: Basic page and site analytics

For some of you, this section might be analytics 101, but I was impressed with how much I could learn with just the basics. When someone has to find information to solve a problem or understand a process, the first things I want to know are:

  • How did they find the page?
  • Did they read the page?
  • Did the page solve the problem?

We started researching the answers to these questions by looking at how users arrived at our page, using the following metrics in Google Analytics:

  • Source: GA tells me customers arrive at most of our information through Google, no big surprise there. Dig a bit deeper, though, and the results vary. Pages with a directive link such as “Try our 101 tutorial” are heavily sourced from the link in the application (66%) rather than from Google (22%), whereas pages behind a less directive, help-style link show a much higher Google share (61%) than native (31%).
  • Referral path: GA generally returns a very high rate of (not set), which is frustrating. However, for the roughly 30% of customers where it is set, I can see at a high level which sections of our site are referring users to the docs and, where those referrals are unusually heavy, start asking why. The referral path, when visible, gives you a bit more “drill down” to see where customers are when they access the information. I usually use it in contrast with the source metric to determine whether a help link is not useful or not context-sensitive, or whether the information on the page needs to be clearer.
  • Source / Medium: GA will provide a combined high-level view of how the user navigated from one place to the next. Similar to the referral path, this metric is usually heavily weighted toward a few sources. However, I use it to understand community involvement. One of our more used pages gets 80%+ of its traffic from either Google or us. The remaining 20% comes from places like Stack Overflow as a referrer, and if we see those numbers pop up we can track down the post, find out what’s going on, and contribute. We can also see if someone has linked directly to our page and what they said about us. (A sketch of pulling these dimensions programmatically follows just below.)
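
If you want to pull those same dimensions programmatically instead of reading them in the GA interface, the Reporting API (v4 at the time of writing) accepts a request body like the sketch below, which you would send to its reports.batchGet endpoint with your own credentials. The view ID, date range, and the /docs/ path filter are placeholders:

```ts
// Sketch of a Google Analytics Reporting API v4 request body that pulls page views
// broken out by the dimensions discussed above. "ga:123456789" is a placeholder view ID.
const reportRequest = {
  reportRequests: [
    {
      viewId: 'ga:123456789',
      dateRanges: [{ startDate: '30daysAgo', endDate: 'today' }],
      metrics: [{ expression: 'ga:pageviews' }],
      dimensions: [
        { name: 'ga:source' },       // where the visit came from (google, direct, a referrer)
        { name: 'ga:referralPath' }, // the path on the referring site, when it is set
        { name: 'ga:sourceMedium' }, // the combined source / medium view
      ],
      dimensionFilterClauses: [
        {
          filters: [
            {
              dimensionName: 'ga:pagePath',
              operator: 'BEGINS_WITH',
              expressions: ['/docs/'], // hypothetical docs section of the site
            },
          ],
        },
      ],
    },
  ],
};
```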

Now we know more about who actually saw our pages and how they got there. Next, we use the following metrics to learn if the customer used or read our documentation:

  • Avg. Time on Page: I use time on page and contrast it with page length and number of exits to understand how the time reflects what it actually takes to read or scan the information. I also look at time on page vs. source. Sometimes it’s instructive to see whether customers arriving from a search spend more or less time on a page than those arriving from direct links or referrals from community sites. On one page the differences by source are: Google 5:39, direct link 6:41, Stack Overflow 8:41. That’s a huge contrast! After some detective work, we learned the Google results had more exits at very short times, and that was dragging down the average for the largest portion of traffic. So drill down into results to get real answers.
  • Page depth and user behavior plugins: One thing we’ve just started researching is plugins that can compile actual user behavior on the page. This is exciting stuff, but you have to be careful and make sure your use complies with your company’s privacy policy. (A lightweight alternative is sketched just below.)
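
While we evaluate those plugins, one lightweight way to approximate “did they actually read it” without a third-party tool is to fire a plain GA event when the reader scrolls most of the way down the page. A rough sketch, assuming analytics.js is loaded; the event category, action, and threshold are my own choices:

```ts
// Rough sketch: fire a one-time GA event when a reader scrolls 75% of the page.
// Assumes analytics.js is loaded; the event category/action names are arbitrary.
declare function ga(...args: unknown[]): void;

let scrollDepthSent = false;

window.addEventListener('scroll', () => {
  if (scrollDepthSent) return;
  const scrolled = window.scrollY + window.innerHeight;
  const total = document.documentElement.scrollHeight;
  if (scrolled / total >= 0.75) {
    scrollDepthSent = true;
    ga('send', 'event', 'Docs', 'scroll-depth', '75-percent');
  }
});
```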

Finally, we need to know if this solved their problem. This is a bit tougher to answer and requires a combination of the following:

  • Traditional support ticket tracking: to determine whether the number of customers accessing the information matches (bad), is more than (worse), or is less than (much better) the number of support tickets covering what the information is intended to address.
  • Measuring the page depth: to understand whether customers landed on the intended page first or last. Did they exit to Google (bad) or to your product (good)?
  • The dreaded survey: to get direct customer response. I say “dreaded” because even good surveys have notoriously low response rates and give you a jumble of data. However, I think mobile, combined with the pain scale from health care, has finally shown us the way: one to three simple questions with emoji ranging from very happy to very angry. I’m working on a few of these now; I’ll let you know how that goes in a status update. (One way to feed such a survey into GA is sketched just below.)
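
To make that concrete, here is one way such an emoji survey could report its answers as GA events so they land next to your other page data. This is only a sketch under my own naming assumptions; it assumes analytics.js is loaded and that each emoji button carries a data-rating attribute:

```ts
// Sketch of a tiny emoji "pain scale" survey that records the answer as a GA event.
// Assumes analytics.js is loaded; the category, action, and rating scale are my own naming.
declare function ga(...args: unknown[]): void;

function recordSurveyResponse(pagePath: string, rating: number): void {
  // Label the event with the page it came from; the numeric value lets GA average it.
  ga('send', 'event', 'Docs survey', 'emoji-rating', pagePath, rating);
}

// Wire each emoji button (from very angry = 1 to very happy = 5) to the handler.
document.querySelectorAll<HTMLButtonElement>('[data-rating]').forEach((button) => {
  button.addEventListener('click', () => {
    recordSurveyResponse(location.pathname, Number(button.dataset.rating));
  });
});
```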

Those tools are the basics of understanding how customers access, view, and use the information I’ve provided, and I’ve only scratched the surface of what you can glean from GA. The more I learn, the better able I am to narrow my focus and learn more about my customers and how they do (or don’t) use the information we develop. Now we get (finally) to the fun stuff: creating goals and driving to meet those goals.

Level 2: Goal-driven information

This is where things got fun and we began thinking differently about how we construct information solutions. We started redefining the way we look at “documentation” in the following ways:

  • We create information solutions rather than documentation. Yes, it’s true this is semantics, but it does get us to begin thinking like “Information Experience Writers.”
  • We firmly believe writing is still a keystone skill and we’re proud to be writers.
  • We also firmly believe that information evolves and so will we.

For each documentation project we create meaningful measures to judge whether a solution is actually meeting its goals. We’re not just talking about surveys or a “Did this help?” widget; we are in the early stages of measuring:

  • Did the information solution guide people to an endpoint?
  • Can we follow the user trail from information to feature or product adoption?
  • Can we measure the retention rates for people who access information solutions vs. those who don’t?

All this takes a more complex analytics setup. Lucky for me, I inherited a good starting point. Using Google Analytics goals (careful: you often have a limited number of these), you can define a path you want to drive a customer through and then watch their progress in a funnel view. The analytics capture each unique customer from the time they enter the path until they exit. This works particularly well for tutorials and procedures with multiple pages.

I modified an existing tutorial with a completely new paradigm and measured the number of customers moving through the funnel. Using this method, I can see how many customers reach the tutorial’s beginning, how many work their way through each section, and where customers abandon the information path. This means I can measure the first requirement: the information solution guided customers to a specific point. It also means I can begin to understand what customers are willing to learn, and I can move to the final level by experimenting with variations.
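
Because GA destination goals and their funnels are defined by page URLs, tutorial steps that live on their own pages are picked up automatically. For steps that don’t have their own URL, one common workaround is to send a virtual page view as each step is completed. A minimal sketch with hypothetical paths, assuming analytics.js is loaded:

```ts
// Minimal sketch: sending a virtual page view as a reader completes each tutorial step,
// so a GA destination goal with a funnel can track progress. Paths are hypothetical.
declare function ga(...args: unknown[]): void;

function trackTutorialStep(stepNumber: number): void {
  ga('send', 'pageview', `/tutorial/getting-started/step-${stepNumber}`);
}

// Example: call this when the reader clicks "Next" at the end of a step.
trackTutorialStep(2);
```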

Level 3: Growth hacking style experiments

This is where we started learning what works well and how we can make it repeatable. I took a popular tutorial, created a copy, and made a dramatic change in one section. I’ve been able to measure the number of customers who go through the new process vs. how many choose the older one. So now I can measure A vs. B and have actionable results.
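
For anyone who wants to try the same thing without a dedicated experimentation tool, a bare-bones version is to assign each visitor a variant at random, remember the choice, and attach it to every GA hit so the two groups can be compared in reports. A sketch, assuming analytics.js is loaded; the custom dimension index and variant names are placeholders you would configure in your own property:

```ts
// Sketch of a bare-bones A/B split: assign a variant once, persist it, and
// attach it to GA hits via a custom dimension. "dimension1" and the variant
// names are placeholders.
declare function ga(...args: unknown[]): void;

function getVariant(): 'original' | 'redesigned' {
  const stored = localStorage.getItem('tutorialVariant');
  if (stored === 'original' || stored === 'redesigned') return stored;
  const variant = Math.random() < 0.5 ? 'original' : 'redesigned';
  localStorage.setItem('tutorialVariant', variant);
  return variant;
}

ga('set', 'dimension1', getVariant()); // every subsequent hit carries the variant
ga('send', 'pageview');
```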

From there I’ve started to experiment with adding a feature to the procedure (which already captures a large audience) and measuring how many customers go through that section and on to the next. Then things get a bit more challenging. To understand whether people are actually using the feature more or less, I had to set events to fire every time a customer used that feature for the first time, measure before adding the information and for an equivalent period after the information was available, and compare the rates for significant differences. The result: a 3% increase in feature use after the information was added, which is, unfortunately, not significant. I’ve had better results with other experiments, with the biggest gain being around 5.5 percent. Measuring the information itself has been much more successful: I can see a 12% increase in user retention through stage 3 of the process, and a 27% decline after adding a different feature, which I removed after the experimental stage.
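
For the “is it significant” question, I lean on a standard two-proportion test. Here is a sketch of that check; the counts are made-up placeholders rather than the figures from my experiments:

```ts
// Sketch of a two-proportion z-test to judge whether a lift in feature adoption
// (e.g., first-time feature use before vs. after the new information) is significant.
// The counts below are made-up placeholders, not the figures from this article.

function twoProportionZTest(
  successesA: number, totalA: number,
  successesB: number, totalB: number,
): number {
  const pA = successesA / totalA;
  const pB = successesB / totalB;
  const pooled = (successesA + successesB) / (totalA + totalB);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pB - pA) / standardError; // z-score; |z| > 1.96 is roughly p < 0.05
}

// Hypothetical example: 400 of 10,000 users adopted the feature before the docs change,
// 412 of 10,000 after (about a 3% relative lift).
const z = twoProportionZTest(400, 10_000, 412, 10_000);
console.log(`z = ${z.toFixed(2)}, significant at 95%: ${Math.abs(z) > 1.96}`);
```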

This has only been nine months of work, and it’s already teaching me, and the whole team, so much about better ways to write, measure, and repeat. We live in a new era for Technical Communications where we must be ready to consistently create a better experience for our users, drive value for our companies, and be able to prove we’re doing both.
