After Special Counsel Robert Mueller’s report found that Russians easily manipulated social media to influence the 2016 presidential election, tech experts and local governments are racing to find new, better ways to protect elections from manipulation and meddling in time for the 2020 race.

Now, two MIT professors say they have a way to help “harden democracy against future attacks,” but worry that the recently passed European privacy law, the General Data Protection Regulation (GDPR), and similar pending legislation in the U.S. (the DETOUR Act) could hinder efforts to systematically and empirically research social media manipulation and its effects on voter behavior.

“Current regulatory regimes disincentivize the retention of this data,” they wrote. “For example, GDPR encourages firms to comply with user requests to delete data about them, including content that they have posted. It may be difficult for firms to accurately quantify exposures for users who deleted their accounts or were exposed to content deleted by others. We should recognize that well-intentioned privacy regulations, though important, may also impede assessments like the one that we propose.”

In their report, published this week in the journal Science, Sinan Aral and Dean Eckles of the MIT Sloan School of Management advocate for a “four-step research agenda for estimating the causal effects of social media manipulation on voter turnout and vote choice.”

While neither Aral nor Eckles believes Russian interference decided the outcome of the 2016 election, they argue that limiting such interference and its effects is paramount to safeguarding America’s democratic processes and ensuring fair, informed elections.

Here are the four steps:

  1. Record “exposures to manipulation,” like seeing an ad on social media sponsored by a non-American entity, or seeing “false content intended to deceive voters, or even true content propagated by foreign actors, who are banned from participating in domestic political processes, with the intent of manipulating voters”;
  2. Combine and compare this exposure data to how voters behave at the polls;
  3. “Assess the effects of manipulative messages on opinions and behavior” by analyzing “similar people exposed to varying levels of misinformation” (see the toy sketch after this list); and
  4. “Compute the aggregate consequences of changes in voting behavior for election outcomes.”
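To make steps 2 and 3 concrete, here is a minimal sketch of the kind of stratified comparison the agenda describes. It is not the authors’ code: all records, field names, and effect sizes below are synthetic stand-ins for the platform exposure logs and voter-file data the paper envisions.

```python
# Toy sketch of steps 2-3 (not the authors' code): within strata of
# "similar people," compare turnout between high- and low-exposure users.
# All records, field names, and effect sizes are synthetic assumptions.
import random
from collections import defaultdict

random.seed(0)

# Steps 1-2 stand-in: synthetic records linking exposure counts to turnout.
# In the proposed agenda these would come from platform exposure logs
# joined with voter-file data; here we simply simulate them.
BASE_TURNOUT = {"18-29": 0.40, "30-44": 0.55, "45-64": 0.65, "65+": 0.70}
users = []
for _ in range(10_000):
    age_bucket = random.choice(list(BASE_TURNOUT))
    exposures = random.randint(0, 20)  # manipulative posts seen (simulated)
    # Hypothetical small negative effect of exposure on turnout probability.
    voted = random.random() < BASE_TURNOUT[age_bucket] - 0.004 * exposures
    users.append((age_bucket, exposures, voted))

# Step 3 stand-in: stratify on the "similarity" variable (age bucket here),
# then compare turnout between high- and low-exposure users in each stratum.
strata = defaultdict(lambda: {"high": [], "low": []})
for age_bucket, exposures, voted in users:
    arm = "high" if exposures >= 10 else "low"
    strata[age_bucket][arm].append(voted)

for bucket in sorted(strata):
    hi = sum(strata[bucket]["high"]) / len(strata[bucket]["high"])
    lo = sum(strata[bucket]["low"]) / len(strata[bucket]["low"])
    print(f"{bucket}: turnout difference (high - low exposure) = {hi - lo:+.3f}")
```

Step 4 would then aggregate estimates like these, weighting by how many voters were actually exposed in each state, to judge whether the induced behavior changes could plausibly have swung an election.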

For such a research project to succeed, Aral and Eckles said, researchers will need a hefty amount of data. Fortunately, social media companies like Facebook and Twitter are well positioned to help supply it.

“Facebook and Twitter constantly test new variations on their feed ranking algorithms, which cause people to be exposed to varying levels of different types of content,” they wrote.
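The point of citing such experiments is that random assignment makes exposure differences exogenous: comparing outcomes across experiment arms gives a causal estimate without having to match similar users by hand. A minimal sketch of that idea, again with purely simulated data (the variants, effect sizes, and numbers below are invented for illustration):

```python
# Toy sketch (simulated, not real platform data): random assignment to a
# feed-ranking variant makes exposure differences exogenous, so a simple
# difference in turnout across arms is an unbiased causal estimate.
import random

random.seed(1)

def simulate_user(arm: str) -> tuple[int, bool]:
    """Return (exposures, voted) for one user under a ranking variant."""
    # Hypothetical: variant "B" surfaces five extra manipulative posts.
    exposures = random.randint(0, 10) + (5 if arm == "B" else 0)
    # Hypothetical small negative effect of exposure on turnout.
    p_vote = max(0.0, 0.60 - 0.005 * exposures)
    return exposures, random.random() < p_vote

arms = {"A": [], "B": []}
for _ in range(20_000):
    arm = random.choice("AB")  # randomized assignment by the platform
    arms[arm].append(simulate_user(arm))

for arm, rows in arms.items():
    turnout = sum(voted for _, voted in rows) / len(rows)
    mean_exp = sum(exp for exp, _ in rows) / len(rows)
    print(f"variant {arm}: mean exposures = {mean_exp:.1f}, turnout = {turnout:.3f}")
```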

But despite constant reporting from mainstream publications like The New York Times about how social media perpetuates “fake news,” a March 2019 study comparing social media users during the 2012 and 2016 presidential elections found that social media doesn’t have as big an impact on voter behavior as liberals and political pundits might think.

“Social media had no effect on belief accuracy about the Republican candidate in [the 2012] election,” said R. Kelly Garrett, the study author and professor at the School of Communication at the Ohio State University. “The 2016 survey focused on campaign issues. There is no evidence that social media use influenced belief accuracy about these topics in aggregate, but Facebook users were unique.”

Not only that, Garrett said, but “survey research conducted immediately after the 2016 U.S. Presidential election estimated that the average American likely saw only a handful of verifiably false news stories during the campaign season, and that many who saw these messages were at least somewhat selective about which they believed.”

But that doesn’t mean Aral and Eckles’ research method isn’t valuable: experts still haven’t done enough research on how social media use affects voter behavior.

“Media are still influential, but their effects tend to be small and contingent on [a] host of other factors,” Garrett said. “Even small changes in belief accuracy can have consequential downstream effects on political behavior, including vote choice.”

The MIT report comes soon after Facebook announced new, stricter rules for political advertisers in the U.S. as part of the company’s wider crackdown on fake news and political manipulation on social media. Currently, advertisers must put a “Paid for by” disclaimer on their Facebook ads, but Facebook said many advertisers “have attempted to put misleading ‘Paid for by’ disclaimers on their ads.”

“That’s why, starting mid-September, advertisers will need to provide more information about their organization before we review and approve their disclaimer,” the company said in a statement. “If they do not provide this information by mid-October, we will pause their ads. While the authorization process won’t be perfect, it will help us confirm the legitimacy of an organization and provide people with more details about who’s behind the ads they are seeing.”

In addition to providing Facebook with a U.S. address, phone number, business email and web address, advertisers will also have to provide either a tax-registered ID number, a “government website domain that matches an email ending in .gov or .mil,” or a Federal Election Commission (FEC) ID number.
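As a toy illustration only, not Facebook’s actual verification code, the quoted domain-match rule amounts to a check like the one below; the helper name and exact matching logic are assumptions.

```python
# Toy illustration only (not Facebook's actual verification code) of the
# quoted rule: the advertiser's email domain must end in .gov or .mil and
# match the domain of the claimed government website. The helper name and
# exact matching logic are assumptions for illustration.
from urllib.parse import urlparse

def email_matches_gov_site(email: str, website: str) -> bool:
    domain = email.rsplit("@", 1)[-1].lower()
    if not domain.endswith((".gov", ".mil")):
        return False
    host = urlparse(website).netloc.lower()
    host = host.removeprefix("www.")
    return host == domain

print(email_matches_gov_site("clerk@cityofexample.gov",
                             "https://www.cityofexample.gov"))  # True
print(email_matches_gov_site("ads@example.com",
                             "https://example.com"))            # False
```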

For smaller businesses, Facebook has a different set of rules. They can either submit their phone number, business email, U.S. address, and business website with a domain that matches the email, or they can “provide no organizational information and rely solely on the Page Admin’s legal name on their personal identification document. For this option, the advertiser will not be able to use a registered organization name in disclaimers.”

Facebook also plans to require political candidates and elected officials to use two-factor authentication so the company can verify that they actually live and work in the U.S.
