Evaluating Dumb Ways to Die

Gosh but I love this. Everybody loves this. It’s cute, it’s engaging, it (eventually) nails the message, and the tune is catchy as heck. Love love love!

But is it effective?

The video and its associated micro-site have gone viral in the last few days. And yeah, maybe some people will criticize it for being long, but fundamentally it’s a very slick piece of work, and full credit to Melbourne Metro Trains for pulling it off and getting over 14 million views within a week of posting the video to YouTube. 14 million! In a week! And while impressions are certainly a useful measure of message exposure, the objective of a safety campaign is behaviour change, not impressions or click-throughs or Facebook shares. So what about that behaviour change?

I’m reasonably confident that a campaign of this scale (you can bet this initiative was not cheap!) has an evaluation framework to assess its effectiveness with respect to the outcome of interest. They’ve got to be able to quantify changes in the target population, specifically in behaviours that contribute to the outcome of interest (or, more likely, to the outcomes they wish to avoid). Now, I realize that corporate relations people speak to the media in broad strokes and deliberately avoid detail, but when Leah Waymark, the General Manager of Corporate Relations at Metro Trains, summarizes the initiative’s evaluation criteria as “…if we can save one life or avoid serious injury, then that’s how we’ll measure the success of this campaign”, pardon me if I get a bit uncomfortable. Especially when I then can’t find anything elsewhere that describes the campaign’s evaluation strategy.

When I first saw the video and looked through the campaign’s micro-site, I smiled in appreciation of the work that went into making it so fun and engaging. And yes, that silly tune got stuck in my head. Shortly after that, my mind started asking questions:

  • How many deaths or serious injuries occur on this train system each year, anyway?
  • How many of these deaths are accidental?
  • How many are the result of system or equipment failures, or employee error?

And so forth. Fundamentally, these are the sorts of questions any initiative needs to fully articulate in the developmental stages. How big is the problem? What do we think are the primary factors behind this problem? How can we target the population segments most at risk? And to quantify the effectiveness of an intervention, we’d want measures of behaviour from before the initiative begins (a baseline) and follow-up measurements to assess short- and longer-term change. For starters.
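To make the baseline-versus-follow-up idea concrete, here’s a minimal sketch of the kind of comparison an evaluator might run: an incident rate ratio with an approximate confidence interval. All the numbers are entirely made up for illustration; the real campaign’s data and evaluation method are, as discussed, not public.

```python
import math

def rate_ratio_ci(baseline_events, baseline_exposure,
                  followup_events, followup_exposure, z=1.96):
    """Incident rate ratio (follow-up / baseline) with an
    approximate 95% CI computed on the log scale."""
    baseline_rate = baseline_events / baseline_exposure
    followup_rate = followup_events / followup_exposure
    rr = followup_rate / baseline_rate
    # Standard error of log(RR) for Poisson counts.
    se = math.sqrt(1 / baseline_events + 1 / followup_events)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical numbers: 30 incidents in 12 baseline months
# versus 21 incidents in 12 follow-up months.
rr, lower, upper = rate_ratio_ci(30, 12, 21, 12)
```

With these invented counts the rate ratio is 0.7, but the interval spans 1.0, so even a 30% drop wouldn’t be distinguishable from chance on a year of data. That’s exactly why “if we can save one life” is an uncomfortable success criterion: rare events need careful baselines and enough follow-up time before anyone can credit the campaign.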

I looked around the micro-site and the Metro Trains website, and did my best with Google searches, in an effort to learn how Metro Trains will be evaluating the campaign. Sadly, I couldn’t find much of anything at the time of writing. Hopefully that’ll change in the coming weeks, since this is after all a very new campaign. That said, I’d like to think there’s an opportunity for organizations to be more open with their evaluation criteria, and the methods by which they aim to measure success against those criteria. I doubt that I’m the only one wishing that organizations would provide links to information about program evaluations. At the least, it would be a good proactive gesture from Corporate Relations to indicate to the public that yes, they’ve thought this stuff through, and that yes, this happy safety campaign has some real methodological rigour behind it and is more than jelly bean eye-candy and a very, very catchy tune.

It would be one simple, cheap, and smart way to let the air out of the ever-present criticism that campaigns such as this are a waste of time and money because they fail to achieve their intended outcomes (or their outcomes are mis-specified).

It’s a small thing for an organization to do. For Metro Trains, it would be a simple and smart addition to Dumb Ways To Die.