Social chaos: What we know about Russian social media election ads so far

Why it matters to you

Whether or not the ads swayed political opinion, they’ve already sparked protests and changes among major social media platforms.

Russian interference in the U.S. election using social media ads is no longer just speculation. Facebook, Twitter, and Google testified before Congress multiple times this week regarding Russia-based accounts that purchased politically motivated ads in the run-up to the 2016 presidential election. While the three networks are still gathering data on just how much impact those Russian social media election ads had, the companies have already uncovered a number of facts about them — here's what social media users need to know about the data.

The interference crosses multiple platforms with a wider reach than first estimated

Facebook, Instagram, Twitter, and YouTube all have data indicating some political ads leading up to the election were purchased by Russian organizations. Facebook ads had the largest audience with sponsored posts reaching an estimated 126 million Americans. The Facebook-owned Instagram also had 120,000 posts with Russian links, though it’s unclear how many users saw those posts.

The reach of Russian trolling ads is also much wider than originally thought. Facebook initially said 3,000 ads were purchased by Russian trolls, with a reach of around 10 million, but that figure has since grown to 80,000 pieces of content and a reach of 126 million — though the new data also encompasses unpaid posts, images, and events.

On Twitter, at least 2,752 accounts and over 36,000 bots sharing political posts were connected to Russia. Twitter says, however, that only 0.74 percent of election-related Tweets came from those accounts, and that they earned just 0.33 percent of the impressions from all political Tweets between September 1 and November 15, 2016.

Google says that one group spent $4,700 on search and display ads during the election, though none of those ads were targeted toward specific states or political interests. YouTube had 1,108 English-language videos from 18 Russian trolling accounts, though not all of those were political and only three percent saw upward of 5,000 views. The company didn’t find any related Google+ ads in English, though there were some written in Russian.

Platforms are already making changes as a result of the interference

While the impact of the ads isn't yet fully understood, they have already sparked changes that users will begin to see rolling out on social media platforms. Facebook and Twitter will soon start labeling political posts, including who paid for them. While ads on TV, radio, and in print are required to carry that "paid for by" disclosure, ads online and on social media don't fall under the same regulations. The bipartisan Honest Ads Act aims to hold online ads to the same standard, but it still needs to make its way through the lawmaking process.

While the "paid for by" label will be easy to see, social media companies are also making changes behind the scenes. During the company's quarterly conference call with investors, Facebook CEO Mark Zuckerberg said that the platform's efforts to enhance security will cut into its profitability. Facebook will double its 10,000-person safety and security staff and expand the AI programs that automatically flag suspicious activity — though that change isn't just for spotting inauthentic political ads.

Many ads were difficult to pick up as fakes

Many of the ads had few clues indicating that their source was outside the United States. The names used were often misleading — the largest group behind the ads is simply called the Internet Research Agency. On Twitter, where usernames can be pretty much anything, one account linked to Russian trolls posed as Tennessee Republicans under the handle @Ten_GOP, and even members of the Trump administration retweeted some posts from that account. Handles among the list of known troll accounts also included ordinary names and misspellings of celebrity names, like "ashleysimpsn."

Not all of the ads targeted the 2016 election directly, but they could have had an effect nonetheless. As the New York Times points out, the Internet Research Agency created Back the Badge and Blacktivists, two groups on opposite sides of issues raised by the Black Lives Matter movement. These groups weren't necessarily election-related but could have been designed to spark chaos and unrest, the Times suggests. Others called for immigration reform and support for the Second Amendment.

Spending wasn’t as high, but reach was still wide

The Internet Research Agency spent $46,000 on Facebook ads, while Trump and Clinton together spent $81 million on the platform. While it’s unclear if other Russian groups were behind some of those other ads, spending by Russian trolls was significantly less than U.S. political groups and candidates.

Advertising reach doesn’t always translate into an exact number of impressions for the same amount spent. When an ad gets more engagements in the form of likes and comments, that ad will reach more people than an ad with the same budget but fewer interactions.
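The relationship described above — two ads with the same budget reaching very different audiences depending on engagement — can be illustrated with a toy model. This is purely a sketch for intuition, not any platform's actual delivery algorithm; the function, parameter names, and values are all hypothetical.

```python
# Toy model (not any platform's real algorithm): engagement can amplify
# an ad's reach beyond what the budget alone buys, because likes,
# comments, and shares resurface the ad in more feeds.

def estimated_reach(budget_usd, cpm_usd, engagement_rate, viral_factor=5.0):
    """Paid impressions bought by the budget, plus hypothetical organic
    impressions earned through engagement. All values are illustrative.
    """
    # Impressions purchased directly: budget / cost-per-thousand * 1000
    paid_impressions = budget_usd / cpm_usd * 1000
    # Extra impressions from engagement spreading the ad organically
    organic_impressions = paid_impressions * engagement_rate * viral_factor
    return int(paid_impressions + organic_impressions)

# Two ads with an identical $1,000 budget and $5 CPM:
low_engagement = estimated_reach(1000, 5.0, engagement_rate=0.01)
high_engagement = estimated_reach(1000, 5.0, engagement_rate=0.10)
```

Under these made-up numbers, the higher-engagement ad reaches roughly 50 percent more people on the same spend — which is why a modest ad budget, paired with provocative content, can produce an outsized audience.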

While it may be impossible to determine if ads had a direct effect, they did spark actions in real life

Establishing a count and an estimated reach is one thing, but there is no way to determine whether (or how many of) those ads actually swayed voters to change their candidate. While the full outcome of the Russian interference may never be uncovered, some of those ads have already provably sparked a reaction. Republican Senator Richard Burr of North Carolina shared during the November 1 hearing that ads from the Russian-backed Heart of Texas and United Muslims of America pages sparked a protest in Houston, Texas, with about 12 anti-Muslim protesters and over 50 counterprotesters.

Ads are only part of the problem

While ads and sponsored posts are a main focus of the investigation, organic reach played a role as well. During the hearing, it was revealed that Russian-backed Facebook pages used non-paid posts to spread misinformation about a number of hot-topic issues. On Twitter, bots helped spread the reach of organic tweets.

Uncovering, then banning troll accounts isn’t so cut and dried

Social media companies are often forced into a tight spot between preventing abuse and hindering free speech. RT (formerly called Russia Today), a Russian international TV network with a YouTube channel, still has an active account despite being named the Kremlin's "principal international propaganda outlet." Google says RT doesn't violate any community guidelines and that its content is also available on cable and satellite.

The 80,000 pieces of content Facebook removed were taken down not because of what was in them, but because the account creators misrepresented who they were. After the first presentation to Congress a month ago, Facebook said that its advertising guidelines help prevent abuse without inhibiting free speech.

Spotting abuse on a platform with 2.1 billion monthly active users is also a challenge, as evidenced not just by political ads but also by the inappropriate ad targeting that Facebook recently apologized for. Facebook uses a mix of AI algorithms and human reviewers, and says it will expand both to improve effectiveness.

There could be more to uncover

While a few dozen Russian-backed ads have already been shared with the public, the House intelligence committee says it is working to release all the findings after removing personal data from them. With the full release of the information yet to come, and social media companies continuing to work on solutions, social media users can expect to see more changes stemming from the investigation, as well as from legislation like the Honest Ads Act.

Published at Sat, 04 Nov 2017 10:15:40 +0000
