3 Ways To Track User Attention On A/B Tests
If you’re testing a new element, it can be useful to know whether people are actually paying attention to it. If we only look at the primary effect of a test, we risk a compromised result if the change wasn’t even noticed in the first place. For this reason we began exploring ways to track user attention on some of our A/B tests. Here are three ways to do so:
Method 1: Track Hovers
Users often hover over what they are looking at. Although hovering isn’t the strongest indicator of attention, it is still worth considering for interactive elements. If a widget contains input elements, it’s more likely to receive intentional hovers; if it’s placed where people will hover unintentionally, it’s less reliable.
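One simple way to filter out those unintentional hovers is to only count hovers that last longer than a short threshold. Here is a minimal sketch; the `#quote-widget` selector, the 500 ms threshold, and the `trackEvent` helper are all assumptions standing in for your own markup and analytics call:

```javascript
// Count a hover as intentional only if it lasted at least `minMs`.
// Kept as a pure helper so the threshold logic is easy to test.
function isIntentionalHover(enterTime, leaveTime, minMs = 500) {
  return leaveTime - enterTime >= minMs;
}

// Browser wiring (hypothetical element and trackEvent analytics stub):
if (typeof document !== "undefined") {
  const el = document.querySelector("#quote-widget");
  let enteredAt = null;
  if (el) {
    el.addEventListener("mouseenter", () => { enteredAt = Date.now(); });
    el.addEventListener("mouseleave", () => {
      if (enteredAt !== null && isIntentionalHover(enteredAt, Date.now())) {
        trackEvent("widget-hover"); // send to your analytics tool
      }
      enteredAt = null;
    });
  }
}
```

Tuning `minMs` up makes the signal stricter (fewer accidental mouse passes), at the cost of missing quick but genuine glances.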
Method 2: Track Scrolls Into View + Timer
When the element you’re testing is well below the fold and you want to confirm how many people actually reach it, you can track scrolling position. For one of our projects, we were testing a competitor comparison table. To see how many people read this section, we tracked two things: how many people scrolled to it and stayed for 3 seconds, and how many stayed for a full 30 seconds. The 3-second mark ensured that people had stopped and started paying attention (rather than just glancing or scrolling past), while the 30-second mark told us both how many people dropped off and how many kept on reading. This way we could tell how interesting the content was initially and how well it held a longer span of attention.
I’ve created a jQuery plugin that fires an event when a visitor scrolls to one or more elements (and can include a timer). Here is a quick demo as well.
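If you’d rather not use a plugin, the same scroll-plus-timer idea can be sketched with the standard IntersectionObserver API. This is a minimal illustration, not the plugin’s actual code; the `#comparison-table` selector and `trackEvent` helper are assumptions, and this simple version reports dwell time when the element leaves view:

```javascript
// Given how long (in ms) an element was in view, return which attention
// thresholds were reached: 3 s = "stopped to look", 30 s = "kept reading".
function milestonesReached(dwellMs, thresholds = [3000, 30000]) {
  return thresholds.filter((t) => dwellMs >= t);
}

// Browser wiring (hypothetical selector and trackEvent analytics stub):
if (typeof document !== "undefined") {
  let visibleSince = null;
  const fired = new Set();
  const observer = new IntersectionObserver(([entry]) => {
    if (entry.isIntersecting) {
      visibleSince = Date.now(); // element scrolled into view
    } else if (visibleSince !== null) {
      // Element left view: report which dwell thresholds were crossed.
      for (const t of milestonesReached(Date.now() - visibleSince)) {
        if (!fired.has(t)) {
          fired.add(t);
          trackEvent("comparison-table-" + t / 1000 + "s");
        }
      }
      visibleSince = null;
    }
  });
  observer.observe(document.querySelector("#comparison-table"));
}
```

A production version would also flush on `pagehide` so visitors who stay on the element and then close the tab are still counted.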
Method 3: Track Clicks
Clicks are the most certain way to gauge attention, but they only work with clickable elements. Sometimes you can create an opportunity for clicks. For example, when we were testing the tabbed competitor comparison table, we also tracked clicks to see if people interacted with it. This allowed us to gauge the level of interest in specific tabs of content (with individual comparisons). On other tests, you can also add links or buttons, which make the content more interactive and create tracking opportunities.
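Per-tab click tracking can be done with a single delegated listener on the tab container. A minimal sketch, assuming hypothetical markup where each tab carries a `data-tab` attribute, a `#comparison-tabs` container, and a `trackEvent` stand-in for your analytics call:

```javascript
// Per-tab click counter, kept pure so the counting logic is testable.
function createClickCounter() {
  const counts = {};
  return {
    record(tabId) {
      counts[tabId] = (counts[tabId] || 0) + 1;
      return counts[tabId];
    },
    get(tabId) {
      return counts[tabId] || 0;
    },
  };
}

// Browser wiring via event delegation (hypothetical markup and helper):
if (typeof document !== "undefined") {
  const counter = createClickCounter();
  const tabs = document.querySelector("#comparison-tabs");
  if (tabs) {
    tabs.addEventListener("click", (e) => {
      const tab = e.target.closest("[data-tab]");
      if (tab) {
        counter.record(tab.dataset.tab);
        trackEvent("tab-click-" + tab.dataset.tab);
      }
    });
  }
}
```

Delegation means tabs added or re-rendered later are still tracked without rebinding listeners.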
Exaggerate The Change Intensity
One additional strategy we might employ in combination with attention tracking is to set a higher intensity for the change during the test. It’s sometimes helpful to exaggerate the change being tested to make sure that more people see it and react to it. Once you know that your change has been effective, you can tone down its implementation to fit your brand style.
Track Distraction Pages
It’s common for us to track visits to pages that take the user away from our primary goal. This can help spot unintended effects of a test. For example, if we are testing a “Get Quote” widget on the home page, we might also track visits to the alternative pricing or blog pages, even though that’s not where we want people to go. A rise in such distraction visits can hint that people are avoiding the widget.
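In practice this just means classifying each visited path against a list of distraction pages and firing an event on match. A small sketch; the paths and the `trackEvent` helper are illustrative, not from the original test:

```javascript
// Pages that pull visitors away from the primary "Get Quote" goal.
// These paths are examples only; substitute your own.
const DISTRACTION_PATHS = ["/pricing", "/blog"];

// True if the path is a distraction page or lives under one.
function isDistraction(path, distractions = DISTRACTION_PATHS) {
  return distractions.some((d) => path === d || path.startsWith(d + "/"));
}

// Browser wiring: fire once per page view (trackEvent is a stand-in):
if (typeof document !== "undefined") {
  if (isDistraction(window.location.pathname)) {
    trackEvent("distraction-visit-" + window.location.pathname);
  }
}
```

Comparing the distraction-visit rate between variants then shows whether the tested change is pushing people away rather than pulling them in.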
You have nothing to lose by collecting additional data about the user’s attention. Knowing whether users even noticed the change, or spent enough time on it, is valuable input into follow-up tests and complements the primary effect measurement. It’s a good way to see the bigger picture.
See Demo Of The Attention Tracking Plug-in Or Get The GitHub Code