Facebook announced Friday that it had removed 82 Iranian-linked pages, groups and accounts on Facebook and Instagram. A Facebook spokesperson answered VOA’s questions about its process and its efforts to detect what it calls “coordinated inauthentic behavior” by accounts posing as U.S. and U.K. citizens and aimed at U.S. and U.K. audiences.
Q: Facebook’s post says there were seven “events hosted.” Any details about where, when, who?
A: Of the seven events, the first was scheduled for February 2016 and the most recent for June 2018. One hundred and ten people expressed interest in at least one of these events, and two events received no interest. We cannot confirm whether any of these events actually occurred. Some appear to have been planned to occur only online. The themes are similar to the rest of the activity we have described.
Q: Is there any indication this was an Iranian government-linked program?
A: We recently discussed the challenges involved in determining who is behind information operations. In this case, we have not been able to determine any links to the Iranian government, but we are continuing to investigate. The Atlantic Council’s Digital Forensic Research Lab has also shared its analysis of the content in this case.
Q: How long was the time between discovering this and taking down the pages?
A: We first detected this activity one week ago. As soon as we did, the teams in our elections war room worked quickly to investigate and remove these bad actors. Given the elections, we took action as soon as we had completed our initial investigation and shared the information with U.S. and U.K. government officials, U.S. law enforcement, Congress, other technology companies and the Atlantic Council’s Digital Forensic Research Lab.
Q: How have you improved the reporting processes in the past year to speed the ability to remove such content?
A: Just to clarify, today’s takedown was a result of our teams proactively discovering suspicious signals on a page that appeared to be run by Iranian users. From there, we investigated and found the set of pages, groups and accounts that we removed today.
To your broader question on how we’ve improved over the past two years: To ensure that we stay ahead, we’ve invested heavily in better technology and more people. There are now over 20,000 people working on safety and security at Facebook, and thanks to improvements in artificial intelligence, we detect many fake accounts, the root cause of so many issues, before they are even created. We’re also working more closely with governments, law enforcement, security experts and other companies, because no one organization can do this on its own.
Q: How many people do you have monitoring content in English now? In Persian?
A: We have over 7,500 content reviewers globally. We don’t provide breakdowns of the number of people working in specific languages or regions because that alone doesn’t reflect the number of people working to review content for a particular country or region at any particular time.
Q: How are you training people to spot this content? What’s the process?
A: To be clear, today’s takedown was the result of an internal investigation that combined manual work by our skilled investigators with automated tools our data science teams use to look for larger patterns of potentially inauthentic behavior. In this case, we relied on both techniques working together.
On your separate question about training content reviewers, here is more information on our content reviewers and how we support them.
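Facebook did not describe its automated tools in any technical detail. As a rough, hypothetical illustration of what pattern-based detection of coordinated behavior can look like, the sketch below flags pairs of accounts that repeatedly share the same links within minutes of each other, one simple signal of possible coordination. The account names, thresholds and data are invented for the example and do not represent Facebook’s systems.

```python
# Illustrative sketch only: a toy "coordination" signal, not Facebook's method.
# It groups posts by shared URL and flags pairs of accounts that repeatedly
# post the same links within a short time window of one another.
from collections import defaultdict
from datetime import datetime, timedelta
from itertools import combinations

# Hypothetical post records: (account_id, url, timestamp)
posts = [
    ("acct_a", "http://example.com/story1", datetime(2018, 6, 1, 12, 0)),
    ("acct_b", "http://example.com/story1", datetime(2018, 6, 1, 12, 3)),
    ("acct_a", "http://example.com/story2", datetime(2018, 6, 2, 9, 0)),
    ("acct_b", "http://example.com/story2", datetime(2018, 6, 2, 9, 5)),
    ("acct_c", "http://example.com/story3", datetime(2018, 6, 3, 15, 0)),
]

WINDOW = timedelta(minutes=10)   # how close in time two shares must be
MIN_SHARED_URLS = 2              # how many co-shared links before a pair is flagged

def coordinated_pairs(posts):
    """Return account pairs that shared at least MIN_SHARED_URLS of the same
    URLs within WINDOW of each other."""
    by_url = defaultdict(list)
    for account, url, ts in posts:
        by_url[url].append((account, ts))

    pair_counts = defaultdict(int)
    for url, shares in by_url.items():
        for (a1, t1), (a2, t2) in combinations(shares, 2):
            if a1 != a2 and abs(t1 - t2) <= WINDOW:
                pair_counts[tuple(sorted((a1, a2)))] += 1

    return {pair: n for pair, n in pair_counts.items() if n >= MIN_SHARED_URLS}

print(coordinated_pairs(posts))  # {('acct_a', 'acct_b'): 2}
```

In practice, a system like the one Facebook describes would weigh many more signals, such as account creation patterns, shared infrastructure and posting cadence, and would hand flagged clusters to human investigators, the manual-plus-automated combination the spokesperson describes above.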
Q: Does Facebook have any more information on how effective this messaging is at influencing behavior?
A: We aren’t in a position to know.
…