Day 3 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)
The TL;DR
- Advertising has made a science out of getting people to buy; cybersecurity should be able to do the same
- Targeted interventions outperform general ones by a wide margin
- “Targeting lift” quantifies how much better a targeted campaign performs
- Here’s an early experiment showing a 33 percentage-point improvement from targeting on one element alone
Our chief product officer Dr. Sanny Liao always says that, with the right data, she can get someone to buy a pair of shoes. And she’s right. At least for me. I mean, I’m a sucker for shoes.
Data is powerful because we can use it to target just the right person at just the right time over just the right medium with just the right message. And with about a million other just-the-right-things—time of day, location, channel, price point, discount, ad color scheme, tone of voice—we can get people to buy our product.
If ad people can get us to buy those shoes with the right data and enough experiments, why can't we do the same thing in cybersecurity? The truth is we haven't really tried until now. Some vendors make a half-hearted attempt at role-based targeting, meaning they send certain training content to certain individuals based on their role or on when they first join a company, but that's about it.
But our customers are starting to use Fable's capabilities to target not just by role, but by risk. They're personalizing intervention messages by citing people's names, access, and precise behaviors, then instructing them on what action to take next, customized with policy details, tool names, processes, and protocols. They're also experimenting with message and nudge frequency, among other things.
Our contention is that the more targeted the campaign, the better it performs in both engagement and employee response. Here is a simplified comparison of two campaigns from one of our customers: one targeted to a specific cohort of employees, the other sent to the whole organization. To isolate the value of targeting, we chose campaigns of roughly the same duration and topic to hold as much constant as we could (though in the real world, we'd want to target on as many elements as possible). Note that the targeted campaign performed 33 percentage points higher than the general one.
We call this differential "targeting lift," and we'll keep running experiments as our customers explore additional meaningful ways to target the right campaigns to the right people at the right time.
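For readers who want the math, here's a minimal sketch of the targeting-lift calculation. The engagement rates below are hypothetical, chosen only to reproduce a 33-point differential; they aren't the actual numbers from the customer campaigns above.

```python
def targeting_lift(targeted_rate: float, general_rate: float) -> float:
    """Lift of a targeted campaign over a general one, in percentage points."""
    return (targeted_rate - general_rate) * 100

# Hypothetical example: 58% engagement on the targeted campaign vs. 25%
# on the org-wide one.
print(f"Targeting lift: {targeting_lift(0.58, 0.25):.0f} percentage points")
# -> Targeting lift: 33 percentage points
```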
Be sure to check in tomorrow (day 4) as we explore behavior change and look at an example of a closed-loop campaign: targeting employees with outdated device OS software, then verifying that they did indeed take action.