A team from the Massachusetts Institute of Technology (MIT) has developed an artificial-intelligence-based system that detects fake news with a very high accuracy rate.
In the fight against online disinformation, a research team from the Artificial Intelligence Software Architectures and Algorithms group, part of MIT's renowned Lincoln Laboratory, set out to better understand the disinformation campaigns that plague the Web, and social networks in particular. The researchers are convinced that, using artificial intelligence, they can counter the spread of fake news and identify those behind it.
They could detect accounts relaying fake news with an accuracy of 96%
It all started in 2014, when Steven Smith, a member of MIT Lincoln Laboratory, embarked on a quest to better understand the rise of online disinformation campaigns. Smith and his fellow researchers wanted to study how malicious groups exploited social media. Their attention was drawn to accounts with unusual activity that appeared to be pushing pro-Russian information.
Curious to learn more and eager to test their approach on a large-scale case study, the research team then requested additional funding to study the 2017 French presidential campaign and find out whether the same techniques were being used there.
Using their program, dubbed RIO (for Reconnaissance of Influence Operations), the researchers collected a wealth of social media data in real time during the 30 days leading up to the election. Their aim was to track and analyze the spread of content identified as disinformation. In one month, they compiled 28 million Twitter posts from one million accounts. With the RIO program, they could detect accounts relaying fake news with an accuracy of 96%.
RIO can detect both bot accounts and those managed by humans
To achieve this precision, the RIO system combines several analytical techniques to build a kind of map of where and how disinformation is disseminated. The specialists don't rely solely on the number of tweets or retweets from each account to judge its disinformation power. "What we have found is that most times this is not enough. It doesn't really tell you the impact of accounts on the social network," says one of the RIO team members, Edward Kao.
Edward Kao developed a statistical approach used in the RIO program that determines whether a social media account is engaged in disinformation, to what extent the account impacts the network, and how much the network amplifies its message. Another team member, Erika Mackin, applied machine learning to study account behavior: whether accounts interact with foreign media, and which languages they use. This approach allowed the RIO program to detect malicious accounts active both in the 2017 French presidential campaign and in Covid-19 disinformation campaigns.
One of the significant advantages of the RIO program is that it can detect both bots and accounts managed by humans, which sets it apart from typical automated detection systems.
While the Lincoln Laboratory is currently focused on the spread of disinformation in European media, the RIO researchers hope the program will eventually be used by the social media industry and even, in some countries, at the government level. Meanwhile, MIT members are developing a new program aimed at studying the psychological effects of misinformation and the behavioral changes it causes in users.