Companies no longer want to advertise on Facebook because it does so little against hate and agitation. And Mark Zuckerberg? Is unimpressed. Yet there are solutions.
Facebook founder Mark Zuckerberg does not care about the criticism of his social network. He made that clear again last week: more than 900 companies and self-employed people have announced, as part of the Stop Hate for Profit initiative, that they no longer want to advertise on the social network. Mainly because Facebook does so little against the hate that is widespread there: because, for example, it let calls for violence against the Black Lives Matter movement stand, and because it classified the far-right Breitbart website as a trustworthy source despite its contacts with white nationalists.
Among the signatories are large US companies such as Coca-Cola, Starbucks, and Verizon, as well as DAX companies such as Henkel, SAP, and Volkswagen. Even though substantial advertising budgets stand behind these names, Zuckerberg seems to be taking it easy. Internally he said: "My guess is that all of these advertisers will be back on the platform soon." For his company, he suggested, it is more of a reputational than a financial problem: a verbal middle finger aimed at the boycott.
Facebook can afford this toughness. The advertising boycott was supposed to hit the social network at its most sensitive point: its source of income. At first, this line of attack seemed to make investors nervous: when Unilever, one of the world's largest companies, announced its support for the initiative on June 26, the share price fell sharply and Facebook's market value briefly dropped by $56 billion. But the shock lasted only a few days. The share price has long since recovered and is back at its pre-boycott level.
The boycott is likely to have a similarly minor impact on revenue, because a boycott only works if enough participants join. Nine hundred companies may sound like a lot. But first, the list of boycotting advertisers has long included not only large companies with correspondingly large advertising budgets, but also a yoga studio from Bridgeport, Connecticut, and a law firm from Houston, Texas. Second, 900 companies are negligible for Facebook, which claims to work with more than seven million advertisers. If a few drop out, plenty of other companies will pay for the vacated advertising space.
This much cosmetics is a must
Of course, Facebook reacted a little anyway; after all, it is not just about money, but also about image. The social network has blocked the accounts, pages, and groups of an extreme right-wing network in the USA. If Facebook classifies a post as newsworthy but sees a violation of its own hate rules, it intends in the future to label it with a warning. And it wants original sources, that is, the original reporting on which other media base their coverage, to be displayed more prominently in the news feed. But these remain cosmetic interventions, like so many Facebook measures in recent years, because they all deal with the symptoms of the problems instead of tackling them at the root. Disinformation? Facebook currently fights it by working with fact-checkers worldwide and flagging problematic websites. Hate on the net? Users can report it, but often little happens afterwards.
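Reduced to its logic, the announced warning-label rule is a simple conditional. The following sketch only illustrates that policy as described above; the function and field names are assumptions, not Facebook's actual moderation code:

```python
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    keep: bool           # does the post stay on the platform?
    warning_label: bool  # is it shown behind a warning?

def moderate(violates_hate_rules: bool, newsworthy: bool) -> ModerationDecision:
    """Hypothetical sketch of the announced rule: newsworthy posts that break
    the hate rules stay up but carry a warning; other violations are removed."""
    if violates_hate_rules and newsworthy:
        return ModerationDecision(keep=True, warning_label=True)
    if violates_hate_rules:
        return ModerationDecision(keep=False, warning_label=False)
    return ModerationDecision(keep=True, warning_label=False)
```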
Facebook would have to intervene much earlier. And one wonders what has to happen before it finally does.
To count on that, however, one has to assume that Facebook can change at all. But what if it cannot? What if Facebook, with its attention and advertising logic, is beyond repair? Journalist Chris O'Brien raises these interesting questions in the US tech magazine VentureBeat. Facebook's problems are not merely the result of a hesitant leadership, even if that leadership has made the situation worse, he writes. Between April and September 2019 alone, Facebook deleted 3.2 billion fake accounts, more than the service's 2.4 billion monthly active users. Still, it feels as if nothing has happened. The problem lies in the "nature of the beast" itself, O'Brien writes.
Facebook does not merely reflect, Facebook weights
Facebook is not the cause of the problems; "Facebook holds a mirror up to society," counters Nick Clegg, former UK Deputy Prime Minister and now Facebook's Vice President of Global Affairs and Communications, in a blog post. Everything good, bad, and ugly in society is expressed by its users on Facebook, Instagram, or WhatsApp. There is, of course, a kernel of truth in this: all the anger and hatred, the quarrels and lies that we so readily blame on communication in social networks appear almost everywhere people meet. At the regulars' table, nuanced voices always have a harder time than loudmouths with a clear-cut opinion, whether analog or digital. Family members can spread false information over coffee just as well as online. And women face insults and threats offline just as they do on the net.
But what Clegg omits: the internet changes how many people individual voices can reach. Through social networks like Facebook, hatred, disinformation, and manipulation can multiply. How many likes something gets and how often we see a statement influence how widely approved and how important we perceive it to be. A group that in analog life might quickly be identified as a few scattered cranks can, through excessive sharing and posting on the internet, create the impression that it represents a relevant social camp. This basic phenomenon can be found on all major platforms: on YouTube, on Twitter, on Twitch, and on many smaller ones too.
Facebook, however, bears a particular responsibility, because the company strongly influences how many people receive news. The social network does not merely depict what is happening anyway; it weights it. If a post gets many likes and comments, there is a high probability that it will be pushed prominently into the news feeds of other users. If a person interacts with content from a news source, there is a higher chance that content from that source will be displayed prominently again and again, which can, at some point, distort their perception of reality. Facebook picks very carefully what is shown to whom and what might interest whom. It is not a mirror of society; it is a mirror of our attention. Or better: a mirror of the preferences Facebook ascribes to us in order to keep us on its pages as long as possible, so that we are shown as many ads as possible and as much money as possible can be earned with our attention.
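In the abstract, such engagement-driven weighting can be pictured as a simple scoring function. The sketch below only illustrates the principle described above; the weights, field names, and formula are invented assumptions, not Facebook's actual ranking algorithm:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    viewer_affinity: float  # how often this viewer interacts with the source, 0..1 (hypothetical)

def engagement_score(post: Post) -> float:
    """Illustrative scoring: posts with more reactions, from sources the viewer
    already interacts with, are weighted higher. All weights are made up."""
    reactions = post.likes + 2 * post.comments + 3 * post.shares
    return reactions * (1.0 + post.viewer_affinity)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The most "engaging" posts are flushed to the top of the news feed.
    return sorted(posts, key=engagement_score, reverse=True)
```

The point is not the exact formula but the feedback it creates: whatever already attracts attention is shown to even more people.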
The Wall Street Journal recently reported on an internal Facebook study which found that the company's algorithms exploit the human brain's attraction to polarizing content with the potential for social division. What changed afterwards? Little. According to the Wall Street Journal, a senior Facebook manager argued that attempts to make exchanges on the network more civil were paternalistic. Another internal Facebook presentation had already shown in 2016 that the social network can play a decidedly problematic role in the radicalization of its users: according to that study, two out of three users who joined an extremist Facebook group in Germany had been recommended it by Facebook's algorithm.
The more people receive their news via social media, the more important it becomes how Facebook weights it. The company has repeatedly been accused of favoring some opinions over others. Conservatives in particular like to complain that their voices are suppressed, which they see as a form of censorship. Technically that would be possible, but there is no evidence for it.
This discussion raises the fundamental question of what we expect from internet platforms in general and social networks in particular. The boundaries of what still counts as free speech and what does not differ in every country in the world. Could universal rules be found that go beyond Facebook's guidelines? And even if they could: should private companies watch over free speech, over what may be said and what may not?
Facebook likes to say that it does not want to interfere with content. There are "disagreements about what counts as hate and should not be allowed," Zuckerberg said in a speech at Georgetown University in 2019; a technology company should not decide what is true. Yet Facebook's role as a mere carrier of messages has long been refuted, because the network actively shapes what may and may not remain on its platform, and not just through algorithms. A Washington Post report indicates that since 2015 Facebook has repeatedly changed its internal rules in ways that allowed disinformation and agitation voiced by the then presidential candidate Donald Trump to stand.
Facebook told the Wall Street Journal that it is no longer the same company it was in 2016, pointing to policies meant to prevent infringing content and to research into how the platform affects society. Yet little seems to have changed in the result. New York Times columnist Kevin Roose recently evaluated which ten US Facebook pages generated an unusually large number of interactions, that is, comments, shares, and likes, on a given day. For one Thursday, eight of those pages belonged to people and organizations known for hatred and agitation, including the right-wing preacher Franklin Graham, the radical conservative Ben Shapiro, US President Donald Trump, and media outlets like Breitbart. The fact that the list looks quite similar on the preceding days not only contradicts Trump's recurring claim that the platform suppresses conservative voices; it also illustrates that hatred and agitation still triumph on the platform.
Time to tweak user engagement
Facebook has proven in the past that it can contain unwanted effects and developments. With successive changes to its news feed algorithm, it has all but eliminated business models built on clickbait headlines. One would like the company to act just as decisively against hatred and agitation. To do so, however, it would have to tweak perhaps its most important key figure: user engagement. This metric measures how many comments, likes, or views a post receives. After almost a decade and a half of social media, it has become clear that people, like Pavlovian dogs, interact all the more strongly the more strident the opinion: the more polarizing, the more hateful, the more engagement.
But the higher the user engagement, the higher the likelihood that the post will be shown to many other users; the more people who see it, the more likely it is that others will also like, comment on, or share it, which in turn flushes the post into even more news feeds. As long as the algorithm works this way, extreme groups will keep exploiting it.
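This feedback loop can be made tangible with a toy calculation. The numbers below are invented assumptions chosen only to illustrate the dynamic, not measurements of any real platform:

```python
def simulate_amplification(initial_engagement: float = 100.0,
                           exposure_per_interaction: float = 20.0,
                           rounds: int = 5) -> None:
    """Toy model: each interaction buys additional exposure, and a fraction of
    the newly exposed users interact again, fueling the next round."""
    for label, reaction_rate in [("polarizing post", 0.10), ("neutral post", 0.02)]:
        engagement = initial_engagement
        for _ in range(rounds):
            exposure = engagement * exposure_per_interaction
            engagement = exposure * reaction_rate
        print(f"{label}: roughly {engagement:.0f} interactions after {rounds} rounds")

simulate_amplification()
```

With these made-up rates, the polarizing post doubles its reach every round while the neutral one fades away, which is precisely the asymmetry the boycott is complaining about.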
Let us be realistic: the advertisers' protest will not change anything. Even if the 900 companies now want to fight hatred and agitation, Facebook interactions have become such a central metric in online marketing that companies will hardly want to do without them. The Delete Facebook campaigns that flare up from time to time, calling on users to delete their accounts, will not help either. Nor will the user petition that the Stop Hate for Profit initiative has now launched. And simply leaving it to Facebook to recognize the problem and act on it would be more than naive.
The call for stricter regulation sounds just as naive, because any rules would apply only to individual countries or regions, and by the time politics gets going, we may be facing entirely different problems. Yet it remains the only plausible way forward. The General Data Protection Regulation offers a little hope here: it came late and was intended only as a European solution, but at the beginning of the year California passed a law that grants Californians at least similar rights. If sensible rules against hate online were found, they would undoubtedly be in demand elsewhere too.