A study of social media trends during last year's US elections found that 20 per cent of election-related tweets sent between September 16 and October 21 were generated by bots – software applications that run automated tasks (scripts) over the Internet. Today, social media bots are increasingly easy to create and hard to detect.
With the help of a YouTube instructional video and a Google spreadsheet, one can create a simple Twitter bot in minutes. Distinguishing between real users and bots can be difficult, but there are some telltale signs that can give users a clue.
Bot accounts tend to have long user names, tend to be newer accounts with suspiciously high follower counts, and feature retweets heavily in their posts. Bot accounts also tend to interact with other bots, often retweeting one another and creating digital noise.
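For illustration, the telltale signs above could be combined into a simple heuristic score. This is only a sketch: the field names and thresholds are assumptions made for the example, not Twitter's actual data model or a validated detection method.

```python
# A minimal sketch of the telltale signs described above.
# Field names and thresholds are illustrative assumptions,
# not Twitter's real API or a validated bot-detection model.

def bot_score(account):
    """Return a rough 0-4 score; higher suggests more bot-like behaviour."""
    score = 0
    if len(account["username"]) > 15:               # unusually long user name
        score += 1
    if account["age_days"] < 90 and account["followers"] > 5000:
        score += 1                                  # young account, high follower count
    if account["retweet_ratio"] > 0.8:              # posts are mostly retweets
        score += 1
    if account["bot_interaction_ratio"] > 0.5:      # mostly interacts with other bots
        score += 1
    return score

# Hypothetical account matching all four signs described in the article.
suspect = {
    "username": "news_updates_daily_2024",
    "age_days": 30,
    "followers": 12000,
    "retweet_ratio": 0.95,
    "bot_interaction_ratio": 0.7,
}
print(bot_score(suspect))  # a high score flags the account for closer review
```

Real classifiers such as the one behind the Indiana University portal weigh many more signals statistically, but the basic idea of scoring accounts against behavioural features is the same.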
Using these parameters, researchers at Indiana University developed a portal, botornot.org, that can tell you with a fairly high degree of certainty whether your followers are real or not. The Financial Standard ran several Twitter accounts associated with high-profile bloggers and influencers through the portal and found a large portion of their followers to be bots.
US President Donald Trump’s prolific use of Twitter to reach his support base directly, and the spread of fake news online in recent months, have raised questions about the ethics of using social bots to execute communication campaigns. Not all bots are harmful.
Many companies with a heavy online presence have bots that act as virtual customer assistants that interact with users and answer minor enquiries 24/7.
Harmful bots, on the other hand, can create misleading impressions of grassroots support. With just ten highly active influencers with more than 10,000 followers each and a handful of bots, one can make any topic trend in Kenya in a matter of minutes. While such a campaign might be good for creating visibility and awareness around the issue or person being promoted, it does little to promote critical engagement, since many users typically stay away from sponsored hashtags.

In addition, social media bots can distort stock market valuations. In one instance in the US, bots were used to create social media chatter around a listed company, artificially inflating its stock price; investors lost millions once the anomaly was corrected.
In Kenya, the threat of misinformation through sponsored hashtags can be understood by examining the use of Twitter during the Westgate terrorist attack. For communication experts and social media analysts, the Westgate attack was a case study in the danger posed by unfiltered information shared across networks faster than users can possibly verify it.
One study by a group of researchers from Israel found that the initial confusion in the Twitter communication that ensued posed a threat to the ongoing security operation. At least twelve hashtags flared up on Twitter as a narrative of events quickly coalesced around tags like #WestgateMall, #WestgateAttack, #WestgateShootOut, #WeAreOne and #UnitedWeStand, among others.