Assessing FoMO: the Value of Self-Reported Data

Fear of Missing Out (FoMO) is a term any millennial knows. That deep, almost painful feeling of wanting to be where all the fun is being had. Where is that magical place? It’s wherever you’re not. I hear this in student conversations all the time, usually in relation to evening and weekend activities. When did “Jealous” become an appropriate response to a friend regaling you with tales of Saturday night? Since FoMO.

Since Fear of Missing Out has such a strong hold on so many college-age people, maybe FoMO can provide some glimpses into student motivation. In student affairs, understanding topics like student motivation, development, and leadership is important and impactful to our work. But how do you measure them? Capturing these concepts is messy and (mostly) requires data that comes from students’ own perceptions of themselves (i.e., self-reported). Numerous data people in my life look down on self-reported data, asking, “How do we know (definitively and without flaw) that what a student reports is ‘accurate data’?” First of all, to that I say self-reported information IS data; and second, I think this attitude fosters a different kind of fear in student affairs professionals. We fear that we cannot “do data” because so much of it is self-reported. In order to quell the fear that some of you may be feeling, I want to look at one example of a study done using self-reported data.

A few researchers, Andrew K. Przybylski, Kou Murayama, Cody R. DeHaan, and Valerie Gladwell (2013), studied the FoMO phenomenon. I’m no statistician, and I imagine you may not be either, so I suggest you start reading at the summary (pp. 1846-1847). But in case you’re not going to read it at all, here it is in a rough nutshell: the researchers created a FoMO scale (wow, how can I take THAT quiz??) and used it with three groups of people across three studies. The results are ones you probably would have guessed (e.g., people experiencing higher levels of FoMO tend to be younger, report less satisfaction with their own abilities and connections with others, and engage more frequently with Facebook), but doesn’t it feel good when your thoughts are backed by data?! :::le sigh:::

Now that you’re interested in the study (hopefully?), jog back to the methods and measures portions of the article, and you’ll find that these researchers tapped into something many in student affairs have been wondering: how do I take all the stuff I know to be true about student motivation, development, and learning outside the classroom and test it? And then how do I capture those findings in a format that has that data-strength je ne sais quoi, and can give out-of-classroom learning the credibility it deserves?

I know there were some scary words in this study, like “regression model” and “confidence interval,” but if you push through those and look at the bigger picture, I hope you’ll delight in the fact that the researchers used participants’ self-reported data to show impact (you know, similar to when we ask students if they learned anything about themselves at our programs).

So, hooray! Ask your students to report their learning back to you in some sort of systematic way, and work the results into your next staff meeting. It doesn’t have to be hard; you don’t want to miss out, right?

How do you feel about asking students to self-report their learning, development, etc.? Have you had the same experience with self-reported student data?

Thanks to Computers in Human Behavior and Andrew K. Przybylski, Kou Murayama, Cody R. DeHaan, and Valerie Gladwell (2013) for the great article, and to College Humor for the great video.