Russian bots pose threat, can be beaten with knowledge
Cyberattacks on elections are becoming more common, and responding to them requires attention
November 8, 2018
The growth of Russia’s cyber-influence operations around the world is a bad sign for the U.S. Its actions during previous election seasons and throughout 2018 are suspect at the very least, and many more covert operations, dangerous and closer to home than ever, have taken place.
One example of this influence is the tactic of creating fake, automated accounts, known as bots, to incite political division among the citizens of a foreign nation.
Thanks to the work of media organizations like The New York Times and the BBC, Russian bots are common knowledge and spotting them is fairly straightforward. Telltale signs include a vast catalog of posts, relative anonymity and a habit of inciting division by taking arguments to the extreme.
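To make those signs concrete, here is a minimal, purely illustrative sketch of how such red-flag checks might be written in code. The account fields, the keyword list and every threshold are assumptions invented for the example, not part of any real detection tool.

```python
# Toy illustration of the bot-spotting red flags described above.
# All fields, keywords and thresholds are hypothetical choices.
from dataclasses import dataclass

EXTREME_WORDS = {"traitor", "destroy", "enemy", "evil"}  # assumed wordlist


@dataclass
class Account:
    posts_per_day: float      # "vast catalog of posts"
    has_profile_photo: bool   # rough proxy for anonymity
    bio_length: int           # another anonymity proxy
    recent_posts: list[str]   # text of recent posts


def bot_score(account: Account) -> int:
    """Count how many of the three red flags an account raises."""
    score = 0
    if account.posts_per_day > 50:  # unusually high posting volume
        score += 1
    if not account.has_profile_photo and account.bio_length < 10:  # anonymous-looking
        score += 1
    extreme = sum(
        any(word in post.lower() for word in EXTREME_WORDS)
        for post in account.recent_posts
    )
    if account.recent_posts and extreme / len(account.recent_posts) > 0.5:  # mostly extreme rhetoric
        score += 1
    return score


suspect = Account(posts_per_day=120, has_profile_photo=False,
                  bio_length=0, recent_posts=["They want to destroy us!"])
print(bot_score(suspect))  # 3 red flags -> worth a closer look
```

A real platform weighs far more signals than this, but the sketch shows why accounts with these traits stand out once you know to look for them.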
Bots are a simple tactic, but they aren’t the most effective one the Russian government uses. The public got a fairly good understanding of how to spot and avoid these bots early on, and many journalists worked hard to expose the problem.
However, Gabriella Bedoyan, instructor of communication at WSU, is worried about the public’s understanding of this issue.
“I don’t think people understand that your social media profile has been created based on your history,” Bedoyan said. “Bots can hijack that information and use it for whatever.”
Since its early use of bots, Russia has adopted several new tactics to damage the U.S. and its citizens. The simplest of these is producing fake news, but this false information has a more complicated goal than outright destruction.
Russia has put out fake news in the style of attack ads, designing these posts to promote uncertainty and doubt within the public. The posts are characterized by scripted comments from dummy accounts meant to create fear, according to a Time magazine report.
These “campaigns,” as they’re commonly called, usually employ a few loyal supporters of Russian President Vladimir Putin and a substantial number of wage workers. The posts these farms create, along with content generated by artificial intelligence, account for a vast amount of the material infesting our social media, according to the report.
Artificial intelligence has a firm grip on society, offering a simple avenue to our favorite content. The important question to ask is: Who benefits the most from artificial intelligence? While many would claim it is the consumer, other entities have their hands in the pot.
“We need to regulate AI and bots, they’re going to participate,” Bedoyan said. “There are pros and cons to this. It’s great to have products and ideas tailored to you, but it’s dangerous when we produce echo chambers for ourselves.”
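Bedoyan’s echo-chamber point can be illustrated with a tiny sketch: a recommender that simply favors whatever topics a user has already engaged with will keep feeding them more of the same. The sample feed, the engagement history and the “rank by past clicks” rule below are hypothetical simplifications, not how any real platform works.

```python
# Toy sketch of how engagement-based tailoring can narrow a feed.
# The data and the ranking rule are illustrative assumptions only.
from collections import Counter


def recommend(feed: list[dict], liked_topics: list[str], k: int = 3) -> list[dict]:
    """Rank posts by how often the user has engaged with their topic."""
    weights = Counter(liked_topics)
    return sorted(feed, key=lambda post: weights[post["topic"]], reverse=True)[:k]


feed = [
    {"topic": "politics", "title": "Outrage piece A"},
    {"topic": "politics", "title": "Outrage piece B"},
    {"topic": "science",  "title": "New research finding"},
    {"topic": "local",    "title": "City council update"},
]
history = ["politics", "politics", "politics", "science"]

for post in recommend(feed, history):
    print(post["title"])
# Mostly "politics" again: each click makes the next feed even narrower.
```

The convenience Bedoyan describes and the echo chamber she warns about come from the same loop: the more the system tailors, the less of the wider world gets through.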
Bots like these are prolific across social media and bring as many benefits as downsides. They make it easy for the general population of an area to access information, but they also allow significant amounts of bad information to spread.
“It’s very likely our community is infested,” Bedoyan said. “Large student populations don’t regulate their feeds. They trust friends on social media too much.”
Now more than ever, we need to be conscious of what we share, how trustworthy our sources of information are and how well we understand the world. If we want to grasp this problem, we can’t afford to be ignorant about it.
Fake news, bots and trolls prey on ignorance. The people most at risk are usually those without much breadth or depth of understanding of the world around them. If we educate ourselves, we can find the truth and promote knowledge over deception.