The big marketing question these days centers on understanding the value of social media, specifically in terms of ROI. Articles abound discussing formulas and algorithms to calculate this valuable number. However, the output of any algorithm is only as good as the input, i.e., “garbage in, garbage out,” as the saying goes. The best way to prevent this pitfall is to be as accurate as possible in measuring social media key performance indicators (KPIs).
There are two things to consider when increasing measurement accuracy: validity and reliability. Validity is the extent to which you are actually measuring what you set out to measure. Reliability is the extent to which you are measuring it consistently over time. Priority is placed on validity, because reliability alone isn't enough: a measure can be perfectly consistent and still consistently measure the wrong thing. So this blog post will focus on establishing measurement validity. Let's walk through how to determine whether measurement is valid.
To understand validity, let’s think through an example. Remember analog televisions (you know, the ones with the rabbit ears)? In order to view a TV show, the placement of the rabbit ears had to be just right. Otherwise, you could end up with two shows playing at once, but neither playing clearly, or you could end up with complete fuzziness. The same is true with social media measurement. If you’re not precise in your measurement, you could be seeing the results of both awareness and advocacy goals, for example, without being able to parse the output completely to one goal or another. To ensure your measurement plan’s accuracy, you want to create a validity portfolio that includes the following: face validity, content validity, convergent validity and divergent validity.
Face validity and content validity are the first barriers to be considered. In some cases, ensuring your measure is face- and content-valid can be good enough, depending on the purpose, scope and cost of the measure.
- Face Validity – This is the "eyeball test." Look at your measurement plan and have others look at your measurement plan to determine if it seems reasonable to everyone that you are measuring what you should be measuring.
- Content Validity – This is a test to determine that the measure covers a representative sample of all relevant behaviors. For example, an advocacy measure that only counts likes, while ignoring shares, recommendations and reviews, would fail this test.
Convergent and divergent validity are deeper barriers to be considered and are related to triangulation of a measure. Triangulation is the idea that multiple measures of the same phenomenon should be employed in order to best interpret the data collected. For instance, using behavioral data in combination with self-reported data allows the analyst to understand the overall phenomenon better than either set of data can on its own.
- Convergent Validity – This is a test to determine the extent to which the data collected from the newly created measure shows a positive correlation with other measures that are intended to measure similar behaviors. For example, if you have self-reported data from a given audience and have created a measure to collect behavioral data from the same audience, the behavioral data should be positively correlated with the self-reported data.
- Divergent Validity – This is a test to determine the extent to which the data collected from the newly created measure shows little or no correlation with other measures that are intended to measure different behaviors. For example, if your new measure of advocacy correlates just as strongly with a measure of an unrelated behavior as it does with other advocacy measures, it probably isn't capturing advocacy specifically.
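The convergent and divergent checks above boil down to computing correlations between paired scores. Here is a minimal Python sketch, assuming you already have self-reported and behavioral scores for the same audience over the same periods; all of the data below is made up for illustration.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (made-up) weekly scores for the same audience.
self_reported = [3, 5, 4, 7, 8, 6, 9]   # e.g., survey-based engagement
behavioral    = [2, 6, 5, 7, 9, 5, 10]  # e.g., observed shares and clicks
unrelated     = [9, 4, 8, 3, 2, 6, 1]   # a measure of a different behavior

# Convergent validity: measures of similar behaviors should correlate strongly.
print(f"convergent r = {pearson(self_reported, behavioral):.2f}")

# Divergent validity: measures of different behaviors should correlate
# weakly (or not at all).
print(f"divergent  r = {pearson(self_reported, unrelated):.2f}")
```

A strong positive coefficient on the first check and a weak one on the second is the pattern you want; in practice you would run this on real KPI data and judge the coefficients against a threshold appropriate to your sample size.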
The stronger the validity portfolio, the more accurate the measure is. To create the most accurate measure of social media data, you must complete as many of these validity tests as time and cost allow. The next step is reliability, which will be a topic for a later discussion.
Becca is a Digital Analyst at Moxie.