The history of the event industry can be characterized as an unending search for the next big “WOW”.
Today’s corporate events and conferences are filled with the best ideas and technology from television, entertainment and social media. They are complex and expensive undertakings requiring large internal teams to develop and support the content, a large portion of the sales force to host the audience, and armies of specialized freelancers to execute the logistics.
Often corporate events cost more than a Super Bowl campaign, which raises the question of why measuring the business impact of an event has never been an integral part of these complex undertakings.
Dynamics Driving Corporate Event Measurement
We believe that a sea change in corporate event measurement is underway, driven by two very different forces.
The first is obvious: economics. CMOs in every industry are under increasing pressure to demonstrate a return from every line item in their budgets. For the first time, innovative companies are conducting market research to determine how effective events are at influencing brand perception, accelerating the pipeline and ensuring customer loyalty through education.
The second dynamic is that customers are now making enterprise-level purchase decisions based on their own independent online research. Traditional marketing departments have lost control of the dialogue and are no longer the only source of product information. No one knows where it goes from here.
Development of the AIR Score
What is needed is a way for event marketers to identify the issues most likely to garner online commentary from their attendees. Working with our client Scott Schenker, Vice President, SAP, we developed a technique called the AIR Score, short for Audience Impact Rating.
The genesis of the AIR Score was the realization that the two most commonly used reporting conventions, “Top Box” and “Averaging”, are both designed to present data in a way that all but ignores those most likely to be part of an online discussion.
The Pitfalls of Top Box Scoring
The “Top Box” system adds the percentage of responses in scoring boxes 4 and 5, and reports the total as the result of the question.
This yields sentences like “80% of the respondents found the xyz aspect of the event to be somewhat or extremely valuable.”
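As a concrete illustration, here is a minimal sketch of the Top Box calculation in Python; the response counts are hypothetical, for illustration only.

    # Minimal sketch of "Top Box" scoring on a 5-point Likert scale.
    # The response counts below are hypothetical, for illustration only.
    responses = {5: 70, 4: 10, 3: 12, 2: 5, 1: 3}  # rating -> count

    total = sum(responses.values())
    top_box = (responses[4] + responses[5]) / total * 100
    print(f"Top Box score: {top_box:.0f}%")  # -> 80%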
This approach has two shortcomings:
1/ Top Box scoring paints an unduly rosy picture of the results.
“Top Box” scoring combines the 5 rankings, which indicate that the respondent is “extremely” positive, with the 4 rankings, which indicate that the respondent is politely noncommittal – the “somewhat” 4s.
This is the problem: a “Top Box” score of 80% can be derived in many ways that are in no way equal. For example, 70% fives plus 10% fours describes an enthusiastic audience, while 10% fives plus 70% fours describes a politely noncommittal one – yet both report as 80%.
2/ Top Box scores provide no insight into what is going on in the other three boxes.
Yes, a veteran executive or manager with the time to read through the data should pick up these distinctions. But they are not readily apparent in the reporting that most people rely on to make decisions.
The Pitfalls of Averaging
As the name implies, averaging focuses attention on the middle, not on what is going on at the fringes.
While “Averaging” is more responsive to the audience than “Top Box”, by design it mutes (damps) the extremes – the respondents we are most interested in.
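A small sketch makes this concrete. The two hypothetical distributions below produce exactly the same weighted average, even though the second is far more polarized:

    # Hypothetical distributions (rating -> count). Both average exactly 3.5,
    # yet the second has far more respondents at the extremes.
    mild      = {5: 5,  4: 55, 3: 30, 2: 5,  1: 5}
    polarized = {5: 40, 4: 10, 3: 25, 2: 10, 1: 15}

    def weighted_average(dist):
        n = sum(dist.values())
        return sum(rating * count for rating, count in dist.items()) / n

    print(weighted_average(mild))       # -> 3.5
    print(weighted_average(polarized))  # -> 3.5; the extremes cancel out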
In one of our comparisons, the AIR Score shifted 20 points, moving from Good to Poor and clearly signaling an increasing number of Detractors, while the Weighted Average showed only a subtle downward trend, staying within a range (north of 3.5) that many companies consider acceptable. This is an important distinction.
What an AIR Score Does
The AIR Score was developed to provide event sponsors and managers with a metric that enables them to quickly identify the issues most likely to influence the larger universe of clients and prospects post-event. The AIR Score is calculated from Likert-scale response data.
AIR categorizes the survey respondents into three segments.
- The Promoters are enthusiastic about the item in question.
- The Neutral group is neither unhappy nor enthusiastic.
- The Detractor group is negative and unhappy.
Likert Response        | AIR Segment
5) Extremely Valuable  | Promoters
4) Somewhat Valuable   | Neutral
3) Neutral             | Neutral
2) Not Very Valuable   | Detractors
1) Not At All Valuable | Detractors
Our hypothesis is that the Promoters and Detractors are much more likely to share their opinions than the Neutrals.
The AIR Score reports the relationship of Promoters to Detractors among all responses as a number between 0 and 100, where 100 means every respondent is a Promoter.
Though they are based on the same data, neither “Top Box” nor “Average” explicitly reveals this relationship.
In effect, this is grading on a curve that is biased so that a response of ‘somewhat valuable’ has the same value as a polite ‘neutral’.
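As an illustration of the idea (the exact formula is available on request, as noted below), here is a minimal sketch assuming the score is simply the Promoter-minus-Detractor proportion rescaled from the range [-1, 1] to [0, 100]; this is a hypothetical reconstruction, not necessarily the formula we apply in client work.

    # Minimal sketch of an AIR-style calculation, assuming the score is the
    # Promoter-minus-Detractor proportion rescaled from [-1, 1] to [0, 100].
    # (An illustration of the idea; the exact formula is available on request.)
    def air_score(dist):
        # Segment per the table above: 5 -> Promoter; 4 and 3 -> Neutral;
        # 2 and 1 -> Detractor.
        promoters  = dist.get(5, 0)
        detractors = dist.get(1, 0) + dist.get(2, 0)
        n = sum(dist.values())
        return 50 * (1 + (promoters - detractors) / n)

    # The two distributions from the averaging example share a 3.5 average,
    # but an AIR-style score separates them:
    print(air_score({5: 5,  4: 55, 3: 30, 2: 5,  1: 5}))   # -> 47.5
    print(air_score({5: 40, 4: 10, 3: 25, 2: 10, 1: 15}))  # -> 57.5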
Applying the AIR Score
The AIR Score takes the entire range of scores (all responses) into account; i.e., it is normalized.
We, and most of our clients, deem an event to be successful when significantly more attendees go home as Promoters than Detractors. We developed an interpretive scale, ranging from Poor to Good, to aid in reading the scores.
Because the AIR Score reports the results as a single number, it is a useful tool for comparing scores from different questions, and even different events. It can be applied after the fact to any historical Likert-scale data, and can be used to compare data gathered using unbalanced scales with data collected using balanced scales.
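For example, under the same assumptions as the sketch above, two questions from different sessions reduce to directly comparable numbers (the counts are hypothetical):

    # Hypothetical response counts for two different questions.
    keynote   = {5: 120, 4: 60, 3: 40, 2: 20, 1: 10}
    breakouts = {5: 50,  4: 90, 3: 70, 2: 30, 1: 10}

    print(air_score(keynote))    # -> 68.0
    print(air_score(breakouts))  # -> 52.0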
While, for now, marketers sponsoring virtual events seem happy to count ‘clicks’, ‘likes’ and ‘tweets’, we are already engaging in discussions about how to connect these participant experiences. The AIR Score will be an important bridge.
We are happy to share the “math”. We invite you to contact us if you have any questions, or would like to have the formula to apply in your own work.
Christopher Korody and Kevin O’Neill are the Partners at Audience Metrix, a market research firm focused on research at corporate events. chris@audiencemetrix.com