Saturday, October 10, 2020

Russian Bots - A Fallacy of The Democrats

 “Russian Bots - After the 2016 US presidential election, mainstream media outlets experienced a major case of sour grapes, blaming Donald Trump's victory on Russian bots. How exactly Russian bots influenced the election is never quite explained; the narrative appears to consist of simply repeating the phrase “Russian bots” over and over again without context. One 2017 New York Times article[65] shows a picture of a fake Facebook account that supposedly acted as a Russian bot, lashing out against Democrats to stoke resentment. The article doesn't investigate the reach this account had, but judging by the image in the article showing one of its posts, the fake account had a grand total of one like, which could have come from the bot itself.

We've previously established that Facebook is reluctant to delete user accounts, since they fatten up its user stats, but once the Russian bot narrative took hold, Facebook started deleting roughly a million accounts a day, likely hitting many legitimate Russophile users in the process. Twitter initially allowed bots but also buckled under pressure from the mainstream media and started purging accounts.

Bots online are more likely to be used to create a fake following, catapulting someone into the media stratosphere. Imagine Rick wants to sell his products through Twitter. He would normally have to painstakingly build his account for years on end before gaining a foothold; instead, he can simply contact a third-party Twitter botnet owner and buy in the ballpark of 100 thousand likes and retweets for a couple of hundred dollars. It's not just easy; it's laughably easy and cheap. Of course, if Rick makes 50 tweets and only one gets that kind of attention, users will notice something fishy and debunk him, but the point is that social media exploit our inherent desire for social proof, one of the principles of social engineering. We want to know what everyone else is thinking and be involved in the coolest new thing.

Never trust publicly displayed metrics, such as the number of likes, dislikes, views, upvotes and so on. These are simply numbers in a database that anyone with admin access can edit as he or she pleases. In a recent example from November 2018, the video game studio Activision Blizzard announced a mobile spinoff of its popular Diablo franchise. The fans were furious and bombed the YouTube announcement videos with nearly half a million dislikes, but the dislike count kept dropping by a hundred thousand at a time[66]. Google can always hide behind “it's the bots disliking, and we simply removed them”, but the problem is that there's no transparency in either the upvoting or the downvoting process on any of these platforms, let alone in the vote-removal and account-banning processes.

Accusations of Russian bots interfering in US politics have made the political discussion even more toxic than usual, since there's no way to prove or disprove any of them. It's the ultimate exercise in solipsism, the arrogant belief that only the speaker is real and everyone else is a figment of his imagination. As we've discussed previously, Alan Turing argued back in the 1950s that a machine could mimic human conversation well enough to pass for a person, and there hasn't been much headway in reliably telling the two apart since. There is simply no way to know whether any content made online is produced by a bot or a human, which means we should judge all content based on its merit rather than the originator's intent or association.
If bots can produce better content than humans, then they should be embraced regardless of who runs them and for what purpose. This doesn't mean bots don't exist online, but they're not necessarily Russian, and they don't necessarily want to interfere in presidential elections. Just as we saw with Stuxnet, nation-sponsored bots and malware try to avoid detection by hiding from humans, not interacting with them. When bots or malware do need to communicate in public, it's done in a way that goes unnoticed, as was the case with one malicious Firefox plugin. In 2017, ESET security researchers discovered[67] that the plugin was fetching its orders by visiting Britney Spears' Instagram photos and combing through the comments until a special one was found. The plugin would hash each comment until it found one that hashed to 183, then run that comment through a special formula to extract the web address where the Turla command server lay in wait. The comment itself was completely ordinary; apart from a few odd hashtags and typos, it could just as well have been a regular user typing on a phone keyboard in a hurry. You would never notice this kind of comment or think it strange, since everyone's hopping from one page to the next in search of amazing content.

So, try to be as precise as possible to eliminate doubt as to whether you're a bot or not, and try to create content that stands on its own merit rather than simply sharing whatever is upvoted, retweeted or liked. Don't accuse anyone of being a bot. Some bot accounts are operated by humans part of the time, so an account can change its behavior on a dime. The accusation itself is deeply dehumanizing, so if you must make it, do so in private to the website owner, not in public. There are online services that gauge a Twitter account's behavior and tell you whether it's likely a bot[68]. Use them before throwing accusations.”


— Cybersecurity: What You Need to Know About Computer and Cyber Security, Social Engineering, The Internet of Things + An Essential Guide to Ethical Hacking for Beginners by Lester Evans
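
The excerpt's point about metrics is worth making concrete: a like or dislike count is just a mutable field in a database, and whoever administers that database can rewrite it. Here's a minimal sketch in Python with an entirely hypothetical schema; it implies nothing about how YouTube actually stores its counters, only that the number shown on a page is stored state like any other.

```python
import sqlite3

# Hypothetical schema; real platforms are far more complex, but the
# number shown on the page is still just stored state somewhere.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE videos (id INTEGER PRIMARY KEY, title TEXT, dislikes INTEGER)")
db.execute("INSERT INTO videos VALUES (1, 'Diablo Immortal announcement', 488000)")

# What any admin-level job can do: rewrite the public metric directly.
db.execute("UPDATE videos SET dislikes = dislikes - 100000 WHERE id = 1")

print(db.execute("SELECT dislikes FROM videos WHERE id = 1").fetchone()[0])  # 388000
```

There's no vote, no audit trail and no user consent involved in that UPDATE, which is exactly why the book says never to trust the displayed numbers.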
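The Turla trick from the second half of the excerpt is also worth seeing in code, because it shows how little a covert channel needs to stand out. Below is a rough Python sketch of the idea; the hash function and the marker format are stand-ins made up for illustration, since the real extension used its own custom hash and a regex keyed on invisible zero-width-joiner characters (the details are in ESET's writeup[67]).

```python
import re

MAGIC = 183  # per ESET, the extension looked for a comment hashing to 183

def comment_hash(text: str) -> int:
    # Stand-in hash: sum of character codes modulo 256.
    # The real extension used its own, different arithmetic.
    return sum(ord(c) for c in text) % 256

def extract_url(comment: str) -> str:
    # Stand-in extraction rule: collect each character that follows an
    # invisible zero-width joiner (U+200D) plus '#' or '@', then treat
    # the collected characters as a short-link path.
    chars = re.findall(r"\u200d[#@](\w)", comment)
    return "https://bit.ly/" + "".join(chars)

def find_c2(comments: list[str]) -> str | None:
    # Scan a photo's comments for the one carrying the C&C address.
    for comment in comments:
        if comment_hash(comment) == MAGIC:
            return extract_url(comment)
    return None

# A planted comment reads innocently; hidden markers spell out the path.
planted = "great show\u200d#2\u200d@k\u200d#d\u200d#h\u200d@u\u200d#H\u200d@X"
print(extract_url(planted))  # https://bit.ly/2kdhuHX
```

To a human skimming the page, the planted comment is noise among thousands of fan comments; to the plugin, it's the address of its command server. That asymmetry is the whole point.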
