Human nature dictates that we often ask questions we believe we already know the answer to. And, much as they may deny it, market researchers often believe they know the outcome of their survey before they send it. Sometimes the responses reveal that their assumptions about the audience are off the mark, or that their own beliefs are not, in fact, as widely held as they had thought. Rarely, though, does a failed survey produce more interesting insights than a successful one.
This is the scenario Cheong, Aleti and Turner found themselves in when their survey of alcohol consumption habits among Australian Twitter users returned only five legitimate Australian responses out of a pool of 250. Rather than crawl away with their tails between their legs, however, the researchers chose to ask why Twitter proved such an unfruitful platform for the survey.
One of the discussion questions in Cheong, Aleti and Turner’s discussion paper asks whether, based on the observations in this case study, it is possible to conduct a survey through a social media site such as Twitter. Of course it is. Websites such as twtpoll and SurveyMonkey provide user-friendly platforms to create surveys, disseminate them via social media and record the results. Twitter itself recently introduced Twitter Polls, which let users ask their follower network a single question and collect the results over the following 24 hours. The challenge Cheong, Aleti and Turner encountered was not in conducting a survey, but in conducting a targeted survey based solely on the content of users’ posts.
Twitter users, and I believe most regular users of social media and email, have grown distrustful of messages from accounts they do not recognise and desensitised to the spam-like language attached to these forms of communication. A major driver of this distrust is the prevalence of ‘bots’, which generate huge amounts of online content. A Twitterbot, in particular, is a program that creates automated posts on Twitter. Twitterbots take a wide variety of forms, including bots that automatically follow users, generate spam, entice clicks, and even post @replies to or retweet posts that contain specific words or phrases. In fact, an IULM University survey recently found that up to 46% of the followers of surveyed brands on Twitter are Twitterbots.
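The keyword-triggered behaviour described above is trivially easy to automate, which is part of why users treat it with suspicion. The sketch below is a minimal, hypothetical illustration in Python: the post data, field names and reply template are all invented for the example, and a real Twitterbot would fetch posts and send replies through the Twitter API rather than operate on a local list.

```python
# A minimal sketch of a keyword-triggered reply bot. All data here is
# invented for illustration; a real Twitterbot would pull posts from,
# and post replies through, the Twitter API.

def find_trigger_posts(posts, keywords):
    """Return posts whose text contains any of the trigger keywords."""
    lowered = [k.lower() for k in keywords]
    return [p for p in posts if any(k in p["text"].lower() for k in lowered)]

def draft_reply(post, message):
    """Compose an @reply addressed to the author of a matching post."""
    return f"@{post['user']} {message}"

# Hypothetical sample data standing in for a Twitter search result.
posts = [
    {"user": "alice", "text": "Great beer at the pub tonight"},
    {"user": "bob", "text": "Watching the cricket"},
]

matches = find_trigger_posts(posts, ["beer", "wine"])
replies = [draft_reply(p, "Please take our survey!") for p in matches]
```

From a recipient’s point of view, an @reply produced this way is indistinguishable from a hand-typed survey request triggered by the same keywords, which is exactly the ambiguity the researchers ran into.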
In this context, it is understandable that Twitter users who received Cheong, Aleti and Turner’s survey request disregarded it or even reacted with anger to the approach. An unsolicited survey request is an annoyance – it asks the user to donate their time with minimal reward. Respondents need motivation to complete a survey: perhaps to support an individual or organisation they approve of, to give insight into a significant social issue, or to play their part in answering a question they themselves would like answered.
In the case of Cheong, Aleti and Turner’s survey, users were given no clear reason why they had been chosen: they were not members of a drinking-related group, a particular industry or an institution that would explain the approach. If the survey had instead looked at drinking among nurses, for instance, individuals would have understood why they were approached and might have responded with the intention of helping others in their industry. Conversely, selecting users based only on the content of their posts sounds exactly like something a Twitterbot could be programmed to do… the survey would likely have achieved similar results had it been disseminated by a Twitterbot, and at least that would have saved the researchers some time.