Why the advertising boycott won’t change Facebook

Companies no longer want to advertise on Facebook because the platform does so little against hate and agitation. And Mark Zuckerberg? Is unimpressed. There would be solutions.

Facebook founder Mark Zuckerberg doesn’t care about the criticism of his social network. He made that clear once again last week: more than 900 companies and self-employed people have announced, as part of the Stop Hate for Profit initiative, that they no longer want to advertise on the social network. Mainly because Facebook does so little against the hate that is widespread there: because, for example, it left calls for violence against the Black Lives Matter movement standing. Because it classified the far-right Breitbart website as a trustworthy news source despite its ties to white nationalists.

Among the signatories are large U.S. companies such as Coca-Cola, Starbucks, and Verizon, as well as DAX-listed companies such as Henkel, SAP, and Volkswagen. Even though substantial advertising budgets stand behind these companies, Zuckerberg seems to be taking it easy. Internally he said: “My guess is that all of these advertisers will be back on the platform soon.” For his company, he suggested, the boycott is more of a reputational than a financial problem. A verbal middle finger toward the boycott.

Facebook can afford this toughness. The advertising boycott was supposed to hit the social network at its most sensitive point: its source of income. At first, this line of attack seemed to make investors nervous: when Unilever, one of the world’s largest companies, agreed to support the initiative on June 26, the share price fell dramatically, briefly wiping about $56 billion off Facebook’s market value. But the shock lasted only a few days. The share price has long since recovered and is back at its pre-boycott level.

The boycott could have a similarly minor impact on sales, because a boycott only works if many participate. Nine hundred companies may sound like a lot. But first, the list of boycotting advertising partners has long included not only large companies with correspondingly substantial advertising budgets, but also a yoga studio from Bridgeport, Connecticut, and a law firm from Houston, Texas. Second, 900 companies are negligible for Facebook, which claims to work with more than seven million advertisers. If a few drop out, plenty of other companies will pay for the advertising space instead.

This much cosmetics is a must

Of course, Facebook reacted a little anyway; it’s not just about the money, but also about the image. The social network blocked the accounts, pages, and groups of an extreme right-wing network in the USA. If Facebook classifies a post as newsworthy but sees a violation of its own hate rules, it intends to label it with a warning in the future. And it wants original sources, that is, the reports on which other media base their coverage, to be displayed more prominently in the newsfeed. But these remain cosmetic interventions, like so many Facebook measures in recent years, because they all deal with the symptoms of the problems instead of tackling them at the root. Disinformation? Facebook currently fights it by working with fact-checkers worldwide and flagging problematic posts. Hate on the net? Users can report it, but it often stays up anyway.

Facebook would have to intervene much earlier. And one wonders what has to happen before it finally does.

To count on that, however, one has to assume that Facebook can change at all. But what if it can’t? What if Facebook, with its attention and advertising logic, is beyond repair? Journalist Chris O’Brien poses these interesting questions in the U.S. tech magazine VentureBeat. Facebook’s problems are not merely the result of hesitant leadership, even if that leadership has made the situation worse, he writes. Between April and September 2019 alone, Facebook deleted 3.2 billion fake accounts, more than its 2.4 billion monthly active users. Still, it feels as if nothing has happened. The problem lies in the “nature of the beast” itself, writes O’Brien.

Facebook doesn’t just reflect, it weights

Facebook is not the cause of the problems; “Facebook holds a mirror up to society,” counters Nick Clegg, formerly British Deputy Prime Minister and now Facebook’s Vice President of Global Affairs and Communications, in a blog post. Everything good, bad, or ugly that users express finds its way onto Facebook, Instagram, or WhatsApp. There is, of course, a kernel of truth in this statement: all the anger and hatred, the quarrels and lies that we so readily blame on communication in social networks appear almost everywhere people meet. At the regulars’ table, nuanced voices always have a harder time than loudmouths with a simple opinion, whether analog or digital. Family members can spread false information over coffee just as well as online.

But what Clegg omits: the internet changes how many people each individual voice reaches. Through social networks like Facebook, hatred, disinformation, and manipulation can multiply. How many likes something gets and how often we see a given statement influences how strongly we approve of it and how important we perceive it to be. A group that in analog life would quickly be identified as a few scattered cranks can, through excessive sharing and posting on the internet, create the impression that it represents a relevant social camp. This fundamental phenomenon can be found on all major platforms. On YouTube. On Twitter. On Twitch. And on many small ones, too.

However, Facebook bears a particular responsibility, because the company strongly influences how many people receive news. The social network does not merely depict what is happening anyway; it weights it. If a post gets many likes and comments, there is a high probability that it will be washed prominently into other users’ news feeds. If a person interacts with the content of a news source, the chance rises that content from that source will be displayed prominently again and again – which can, at some point, distort the perception of reality. Facebook picks out very carefully what is shown to whom and what might interest whom. It is not a mirror of society; it is a mirror of our attention. Or better: a mirror of the value Facebook ascribes to us, designed to keep us on its pages as long as possible, so that we see as many ads as possible and as much money as possible can be earned with our attention.
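The weighting described here boils down to engagement-based ranking. A minimal sketch of the idea in Python (the signal names, weights, and affinity factor are invented for illustration; Facebook’s actual ranking model uses thousands of signals and is not public):

```python
# Illustrative sketch of engagement-weighted feed ranking.
# Signals, weights, and the affinity factor are invented examples.

def engagement_score(post, viewer_affinity):
    """Score a post: raw engagement, boosted by how often the
    viewer has interacted with this source before."""
    raw = post["likes"] + 2 * post["comments"] + 3 * post["shares"]
    return raw * viewer_affinity

def rank_feed(posts, affinities):
    """Order posts so high-engagement items from familiar sources
    surface first, regardless of their accuracy or tone."""
    return sorted(
        posts,
        key=lambda p: engagement_score(p, affinities.get(p["source"], 1.0)),
        reverse=True,
    )
```

Note what such a ranking never asks: whether a post is true or hateful, only how strongly people reacted to it.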

The Wall Street Journal recently reported on an internal Facebook study which found that the company’s algorithms exploit the human brain’s attraction to polarizing content with the potential for social division. What did Facebook change afterward? Little. According to the Wall Street Journal, a senior Facebook manager argued that attempts to make exchanges on the network more civilized would be paternalistic. Another internal Facebook presentation showed as early as 2016 that the social network can undoubtedly play a problematic role in the radicalization of its users: according to that study, two out of three users who joined an extremist Facebook group in Germany had it recommended to them by Facebook’s own algorithm.

The more people receive their news via social media, the more important how Facebook weights it becomes. The company has repeatedly been accused of favoring some opinions over others. Conservatives in particular like to complain that their voices are suppressed – which they see as a form of censorship. Technically, that would be possible, but there is no evidence for it.

This discussion raises the fundamental question of what we expect from internet platforms in general and social networks in particular. The boundaries of what still counts as free speech differ in every country in the world. Could universal rules be found that go beyond Facebook’s guidelines? And even if they could: should private companies watch over free speech? Over what may be said and what may not?

Facebook likes to say that it doesn’t want to get involved in content. There are “disagreements over what counts as hate and should not be allowed,” said Zuckerberg in a speech at Georgetown University in 2019; a technology company should not decide what is true. However, Facebook’s role as a mere messenger has long been refuted, because the network actively shapes what may and may not remain on its platform – and not just via algorithms. A Washington Post report indicates that since 2015 Facebook has repeatedly changed its internal rules in such a way that disinformation and agitation expressed by the then presidential candidate Donald Trump were in fact permitted.

(In a video the U.S. president shared on Twitter, one of his supporters shouts “White power” several times. After sharp criticism, Trump deleted the video.)

Facebook told the Wall Street Journal that it is no longer the same company it was in 2016, pointing to new policies against infringing content and to research into how the platform affects society. Only, the results seem to have changed little. New York Times columnist Kevin Roose recently evaluated which ten U.S. Facebook pages generated an unusually large number of interactions – comments, shares, and likes – on a given day. On that Thursday, eight of the ten pages came from people and organizations known for hatred and agitation, including right-wing preacher Franklin Graham, radical conservative Ben Shapiro, U.S. President Donald Trump, and media outlets like Breitbart. The fact that the list looks quite similar on previous days not only refutes Trump’s recurring claim that the platform suppresses conservative voices. It also illustrates that hatred and agitation still triumph on the platform.

Time for something to change about user engagement

Facebook has proven in the past that it can contain unwanted effects and developments. With ever new changes to the newsfeed algorithm, it has all but eliminated business models based on clickbait headlines. One would like it to act against hatred and agitation in the same way. To do that, however, it would have to tweak perhaps its most crucial key figure: user engagement. This metric measures how many comments, likes, or views a post gets. After almost a decade and a half of social media field-testing, we know that people, like Pavlovian dogs, interact most strongly with the shrillest opinions. The more polarizing, the more hateful, the better.

But the higher the user engagement, the higher the likelihood that the post will be shown to many other users; the higher the chance that they too will like, comment on, or share it; and the more users it will in turn be washed into the news feeds of. As long as the algorithm works this way, extreme groups will exploit it.
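This feedback loop, in which engagement increases reach and reach increases engagement, can be made tangible with a toy simulation. All rates and the amplification factor are invented; the point is only the compounding dynamic:

```python
# Toy simulation of the engagement feedback loop: each round, a post's
# reach grows in proportion to the engagement it already received.
# Rates and the amplification factor are invented for illustration.

def simulate_reach(initial_reach, engagement_rate, rounds=5):
    """Each round, every engagement pushes the post into ~2 more feeds."""
    reach = initial_reach
    history = [reach]
    for _ in range(rounds):
        engagements = reach * engagement_rate
        reach = reach + int(engagements * 2)  # amplification step
        history.append(reach)
    return history

civil = simulate_reach(1000, 0.02)   # measured, nuanced post
shrill = simulate_reach(1000, 0.10)  # polarizing post
```

Starting from identical reach, the post with the higher engagement rate pulls further ahead every round, which is exactly why shrill content wins under this logic.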

Let’s be realistic: the advertisers’ protest will not change anything. Even if the 900 companies now want to fight hatred and agitation: Facebook interactions have become such a pivotal metric in online marketing that companies will hardly want to do without them. The #DeleteFacebook campaigns that flare up from time to time, calling on users to delete their accounts, will not help either. Nor will the user petition that the Stop Hate for Profit initiative has now launched. And simply trusting Facebook to recognize the problem and act on it would be more than naive.

The call for stricter regulation sounds just as naive, because there will only ever be solutions for individual countries or regions, and by the time politics gets moving, there may be completely different problems. But it remains the only plausible way out. And the General Data Protection Regulation gives a little hope here: it came late and was intended only as a European solution, but at the beginning of the year California passed a law that grants Californians at least similar rights. If someone finds a workable regulation against hate online, it would undoubtedly be in demand elsewhere too.

Who made this hashtag big?

Tens of thousands tweeted during the Black Lives Matter protests on #DCBlackout about a network failure that never happened. A lesson on the power of uncertainty.

Marcie Berry sounds alarmed. “Currently NO INFO is coming out of D.C.,” she tweets on June 1 at 3:26 a.m. about the Black Lives Matter protests in front of the White House. “No streams, no posts, pictures or videos. Everything stopped at the same time.” Berry sees this as the next step in an escalation by the police: “They start killing and try to hide it with jammers.” She tags the tweet with the hashtag #DCBlackout.

Berry’s Twitter profile picture is a cartoon drawing of a young white woman with long purple hair. Her account has a little over 200 followers. She is one of the first to report alleged internet disruptions in the U.S. capital. But then the hashtag #DCBlackout spreads extremely quickly on Twitter – and with it the tale that U.S. authorities are responsible for paralyzing the internet in the capital. Presumably to act against the protesters who have been besieging the White House for days and have driven Donald Trump to hide in a bunker.

When Alex Engler wakes up in Washington, D.C., a few hours later and looks at his Twitter timeline, he is amazed. He sees a flood of tweets using the hashtag #DCBlackout to report that nothing digital works anymore in his hometown. But Engler’s internet works flawlessly. It quickly becomes clear to him that this is a disinformation campaign. Engler has researched the topic for years, first in Chicago, now at the Brookings Institution think tank in Washington. He takes a closer look at who is spreading the news of an internet blackout in the U.S. capital and notices that many of the accounts spreading the rumour were created only recently. Most have never tweeted before, or only very little.

“These are clearly fake accounts,” tweets Engler at 8:14 a.m. By this point, however, he no longer stands a chance against the supposed news of the internet shutdown: 35,000 accounts had already sent more than half a million tweets with the hashtag #DCBlackout – pushing it into Twitter’s trending topics.

Bot or not?

Elsewhere on Twitter, debates quickly flared up over whether the rumour of an internet blackout in D.C. was being spread and amplified by so-called bots – social media profiles behind which there is not a human but a machine. In contrast to bots that openly retweet specific hashtags or do funny things, social bots conceal that they are machine-operated.

Since the U.S. election campaign in 2016 and the Brexit vote in the U.K., some have been concerned about the impact such machine-fed accounts can have on discussions in social media when they speak up in debates, spread hashtags, and amplify tweets. Scientists and I.T. experts argue bitterly about the role of these robot accounts: while some are convinced that they can massively affect discussions on Twitter – and thus undermine democratic processes – others are confident that social bots spreading systematic misinformation do not exist, or at least play no central role.

Because it’s complicated. Marcie Berry, the Twitter user who launched the story of the internet that disappeared overnight in D.C.: is her account a social bot? A real person? Or something in between? What is spread by humans and what by machines, what is shared with honest intentions and what is used to troll and steer democratic decision-making and emotional movements, is often confusing and difficult to distinguish. Even bot researchers and their classification tools frequently fail to classify accounts correctly (SSRN: Rauchfleisch, A. et al., 2020). Twitter and Facebook now make some effort to ban inauthentic accounts from their platforms. In practice, however, this too proves difficult.

The hashtag #DCBlackout teaches a lot about how rumours are spread and amplified in social networks these days. About the roles played by social bots, professional trolls, and gullible users who pass on nonsense unchecked, and about the sophisticated steering of disinformation campaigns.

As soon as researcher Engler expressed his suspicion of fake accounts on June 1, he was promptly accused of having fallen for the government’s strategy himself. “I saw no evidence that there was a blackout,” Engler argues. The answer: “The fact that there is no evidence is exactly the point.” For six hours, it was claimed, there had been no tweets from protesters in D.C.

Such distrust is hardly surprising. Evidence of brutal police violence in the U.S. spreads through social media almost daily. Black Lives Matter activists have long believed the state and its authorities capable of such methods – the perfect breeding ground for a rumour that resonates with people who are already upset. “These are people who understandably have no trust in authorities at all,” says Engler. “People who distrust experts are more susceptible to misinformation.”

Just before Berry and others started tweeting about the #DCBlackout that night, dramatic pictures from Washington appeared on Twitter, showing police officers beating protesters and burning government buildings. Photos that could be read as another puzzle piece in the narrative of a police force that blocks the internet in order to escalate. That the burning government buildings were in fact screenshots from the Netflix series Designated Survivor only came out much later – when the suspicion had already been sown.

Indications of coordinated disinformation

Engler is convinced that someone took advantage of the already heated atmosphere surrounding the protests in Washington: as yet unknown actors had programmed bots, automated accounts that retweeted tweets with the hashtag #DCBlackout.

This is supported not only by Engler’s claim to have seen some obvious bot accounts in action: Twitter, too, became active in the early morning hours of June 1 and automatically identified and deleted many accounts as bots. And the platform’s detection systems are usually correct, at least for purely automated accounts, says communication scientist Darren Linvill, who has checked Twitter’s bot-detection systems.

“Real bots are technically easy to identify,” he says. Of course, you have to use more features than many of the poorly performing bot-detection programs do, which often rely on tweet density as the decisive factor. But there are quite stable features – which he does not want to name, so as not to tell attackers how to camouflage themselves better.
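Linvill keeps his features secret, but the underlying idea of combining several account signals instead of relying on tweet density alone can be sketched like this (the signals and thresholds are invented examples, not his actual method):

```python
# Minimal bot-likelihood heuristic combining several account signals.
# Features and thresholds are invented for illustration; real detection
# systems (and Linvill's undisclosed features) are far more elaborate.

def bot_signals(account):
    """Return suspicious signals for an account dict with keys:
    age_days, tweets, followers, default_profile_image."""
    signals = []
    if account["age_days"] < 7:
        signals.append("very new account")
    if account["age_days"] > 0 and account["tweets"] / account["age_days"] > 100:
        signals.append("implausible tweet density")
    if account["tweets"] < 5:
        signals.append("almost no history")
    if account["followers"] == 0:
        signals.append("no followers")
    if account["default_profile_image"]:
        signals.append("default profile image")
    return signals

def looks_like_bot(account, threshold=2):
    """Flag accounts that trip several signals at once, not just one."""
    return len(bot_signals(account)) >= threshold
```

Requiring several signals at once is what keeps a heuristic like this from flagging every quiet but genuine new user.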

Linvill examined an extensive data set of accounts suspended by Twitter in 2018 – three million tweets from 3,800 accounts accused, among other things, of being part of a Russian disinformation campaign. His result: Twitter correctly rated the profiles as social bots. “Only a dozen were real people. An extremely low error rate,” says Linvill.

Darius Kazemi of the Mozilla Foundation, who is usually sceptical about attributing social media activity to bots, is also certain in this case: a coordinated disinformation campaign is behind #DCBlackout.

Most of the social bots used were, according to Engler, not particularly sophisticated: they primarily disseminated content carrying the hashtag #DCBlackout. But that is the first step of the manipulation. “Because if you as a user see that such a tweet has many interactions, you tend to trust the content,” says Engler. Twitter’s algorithms also latch onto high-reach hashtags and popularize the topic further – unless, as happened with #DCBlackout, the platform actively intervenes.

Technically, it is straightforward to program such social bots, says Engler. He describes the #DCBlackout campaign as “rather culturally sophisticated” – because the rumours were so cleverly spread and then debunked.

Not much else is known about the campaign – and perhaps never will be. Whether, for example, the attackers launched the rumour themselves or merely hijacked a misinterpretation from the activist scene. But Linvill and Engler see evidence that its course was deliberately planned.

The fact that Twitter had already deleted numerous accounts in the early morning of June 1 further fueled the discussion, because it strengthened some users’ theory that critical voices were being suppressed. And any attempt to make clear that it was a hoax seemed just as suspicious.

Another phenomenon Engler observed points in a similar direction. On the morning of June 1, a user sent him a screenshot of numerous other accounts declaring the blackout to be misinformation. “Can I have your opinion on this, please?” she asks. The accounts all write that they live in the region themselves and know people who commute to D.C. They all use the same wording. “This hashtag looks like misinformation,” they tweet. “Stop unsettling people.” Always the same story. Always the same text. For Engler, there is no doubt that this, too, is an organized campaign – but this time to correct the previous disinformation. Only: what sense would that make?

A question that is quickly answered if you look at the effect. By the time Engler first saw these corrective tweets, many activists had already formed their opinion. Their reading: now “the powerful” are using social bots to suppress tweets about the blackout. The confusion about what is real and what is not is complete.

If one assumes that a single campaign is behind the entire hashtag, then an effect was exploited here that researchers have been discussing for some time. The liar’s dividend: disinformation is not only about publicly spreading lies and sowing doubt about facts, but also about undermining the credibility of authentic sources. Around #DCBlackout, those who exposed the rumour were additionally discredited as untrustworthy, since obvious bot accounts were spreading their arguments. In other words: the actual trolls behind the hashtag helped spread the correction – and thereby made their own story only more credible.

“A brilliant Russian move.”

“Creating this level of doubt is a brilliant Russian move,” says disinformation researcher Linvill. However, he means this generally – he is careful about specifically attributing who is behind the inauthentic accounts around the #DCBlackout hashtag. The hashtag may have been amplified on behalf of Russia, but other forces are also interested in destabilizing the situation in the United States. “So it could also be a copy of a Russian strategy that has been successful in the past.” A strategy aimed primarily at spreading insecurity and sowing mistrust, thereby weakening trust in democracy. Linvill observed this repeatedly before the 2016 U.S. election, when Russia’s Internet Research Agency paid numerous people to spread disinformation about U.S. politics on Twitter.

However, Linvill currently sees a shift in the attackers’ strategy – away from pure social bots toward what he calls “cyborg accounts”: accounts that are controlled first by machines and then by people. Behind them could be, for example, people in troll factories in Russia or elsewhere who distribute messages and hashtags either manually or machine-assisted.

But these, too, says Linvill, should not be overestimated – they intervene only in a very targeted but efficient way to reinforce topics that were often already there. “Conspiracy theories mainly spread organically. There are people with strange views, and when they are amplified by the Twitter algorithm, it looks inauthentic but can be real. That makes our work a lot more difficult.”

Looking at the Twitter history of Marcie Berry – the woman who spread the #DCBlackout rumour so early – one concludes that the account is more likely a person than a machine. She may well be an angry protester, shocked by police violence, who writes a lot about the protests on Twitter. But she could also be a young man somewhere who, on behalf of a state or an organization, stirs up trouble in the United States and developed Marcie Berry as one of many so-called personas: a coherent personality on Twitter that looks as authentic as possible and can hardly be exposed as fake.

Here, too, the liar’s dividend comes into play. In the end, only someone who knows Marcie Berry personally can clarify with certainty whether there is an authentic person behind her account or not. According to Engler, this is also part of the attackers’ plan: “They want to spread uncertainty. If we can no longer trust each other, they will have achieved their goal.”

According to Engler and Linvill, hacked accounts were also used to amplify the spread of #DCBlackout: profiles that real users had operated for years but lost access to through identity theft. “There is a black market in hacked accounts for such cases,” says Engler. Darius Kazemi investigated this as well, by contacting the owners of these accounts.

Kazemi is also certain that the social bots involved were used only for amplification – in this case not to actively formulate content: training an artificial intelligence to do that is hard. Engler agrees: “Machine language production is not yet good enough to produce credible content. A few sentences, yes, but an entire misinformation campaign?”

But above all, it is not necessary: to create uncertainty and weaken trust in democracy, neither sophisticated artificial intelligence nor the dreaded deepfake videos are required. A few paid trolls and many simple social bot accounts that don’t even pretend to be human – merely retweeting existing rumours – are enough. Above all, what is needed are people with whom disinformation falls on fertile ground and finds resonance. That should worry us more.

Successful permission marketing using the example of push notifications

With permission marketing, the users addressed decide that they want to receive company messages. The channel plays an important role here, and push notifications are a new option here.


Introduction

Messengers like WhatsApp and services such as Facebook, Instagram, and Snapchat have fundamentally changed communication: smaller “communication snacks” are in demand, but at a higher frequency. And everyone is struggling for attention on this virtual stage.

This also increases the demands on marketing communication: it should appeal, inspire, involve, and establish an emotional connection. Above all, it’s about increasing customer loyalty. In my view, permission marketing is an excellent tool for this.

What is permission marketing?

In principle, permission marketing is old hat: marketing guru Seth Godin coined the concept back in 1999. Even then he predicted that traditional forms of marketing such as TV advertising – that is, invasive or interruption marketing – would lose effectiveness, because consumers want more control over the information they consume.

Consumers want to decide for themselves from whom and via which channels they receive content. According to Godin, a company must accordingly “earn” the permission to get in touch with the customer. And there must be the possibility to opt out again at any time. In this way, consumers receive news that is valuable to them, which makes them feel connected to the company.

“Permission-based marketing is the privilege (not the right) to send expected, personal, and relevant advertising to people who want to receive it.” – Seth Godin

Permission marketing acknowledges the new power of consumers to ignore marketing. At the same time, it embodies the idea that respectful treatment is the best way to earn attention. “Real” permission differs fundamentally from the implicit or behavioral consent that is common – but illegal – practice with cookie banners.

This is not the only reason why permission marketing is back in fashion. The most significant advantage that permission marketing has over traditional forms of marketing is the higher engagement rate. After all, a permission marketing campaign is about maintaining a long-term relationship based on trust. It is not aimed at achieving immediate results, but rather at winning regular customers and strengthening brand loyalty.

Permission marketing is not aimed at achieving immediate results, but rather at winning regular customers and strengthening brand loyalty.

The newer permission marketing channels therefore combine the style of private exchange with professional marketing communication. This opens up the chance of a more intense bond between customers and prospects than conventional approaches offer. Every marketing professional knows that interested people are more easily moved to specific actions. And because engagement is high, conversion rates in permission marketing are higher than in other forms of marketing.

Additional advantages:

  • Increase in success: With permission marketing, companies reach those who expressly request it. This automatically increases engagement and conversion rates.
  • Strong customer loyalty: With permission marketing, companies can advance their content distribution and bring users back to their website. 
  • Legal certainty: With consent, many data protection problems no longer play a role. 

Permission marketing through push notifications

Permission marketing is often equated with newsletter marketing. Email and SMS marketing still work, but they are complex to implement and not always the best choice. In my view, push notifications, as a contemporary form of in-app messaging, have great potential.

There are four types:

  • Web push: On a website, visitors can agree to receive messages. If they do, they then receive news directly on their (lock) screen; clicking on the notification takes them to the target page.
  • App push: For many notifications from apps, a connection via API to back-end systems makes sense. For example, information about travel bookings and the like can be sent. Also, any actions in the apps or more extended inactivity can trigger such notifications.
  • Messenger push: 58 million Germans use WhatsApp every day. Unfortunately, the messenger can hardly be used for advertising communication. WhatsApp notifications are therefore only interesting for 1:1 interaction and transactional messages such as reservation confirmations or appointment reminders. The same applies in principle to Facebook Messenger. 
  • Wallet Push: With Wallet Push, the website invites iOS visitors to add a “News Card.” After adding the card, recipients can receive messages. At the same time, the wallet card can be updated with offers, codes, and much more. It is also possible to add QR codes, loyalty functions, or geo-fencing. It is reasonable and sensible to use wallet push and web push simultaneously on one website.
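What the browser ultimately receives in a web push is essentially a small JSON payload assembled on the server. A minimal sketch in Python (the field names mirror the common browser notification shape, and the values are invented; actual delivery additionally requires the visitor’s subscription endpoint and a VAPID-signed request, for example via a library such as pywebpush):

```python
import json

# Sketch of a server-side web push payload. Field names follow the
# common notification shape; title, body, and URL are invented examples.

def build_push_payload(title, body, target_url, icon=None):
    """Assemble the JSON document a push service delivers to the
    subscribed browser; clicking the notification opens target_url."""
    payload = {
        "notification": {
            "title": title,
            "body": body,
            "data": {"url": target_url},
        }
    }
    if icon:
        payload["notification"]["icon"] = icon
    return json.dumps(payload)
```

The service worker on the visitor’s side then reads this payload and displays the notification, which is why no email address or other personal data is needed for the opt-in.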

Special case wallet push

Wallet push is of particular importance for another reason: after all, iOS users cannot be reached via web push, but this group makes up around 20 percent of the mobile market. Depending on the website, the proportion of users can be significantly higher, for example, 30 to 40 percent.

Wallet push also proves to be a valuable marketing tool in brick-and-mortar retail. After all, retailers are finding it increasingly difficult to get physical loyalty cards into customers’ hands. Done digitally, new advantages emerge: messages can appear automatically on the lock screen at a specific time or place, change their appearance and content, and arrive as push notifications. The wallet provider can trigger the update of the card and the associated messages.

These notifications are an ideal tool to keep customers up to date and provide them with targeted offers. They can also be used to remind you of appointments or to send personal messages.

Examples of a push notification on a laptop and a smartphone

Push communication: advantages for consumers and marketers

In general, more than 10 percent of all consumers now agree to push notifications, and the trend is rising. This is not the only reason the technology offers many advantages for marketers:

  • One click is enough: In contrast to newsletters, customers do not have to enter personal data such as an email address. A simple confirmation is enough. For this reason, the opt-in rate is particularly high.
  • Immediate communication: Push messages are transmitted immediately and appear directly on the screen. This reduces the risk of messages being overlooked and results in open rates of up to 90 percent; by comparison, emails only reach about 25 percent.
  • Traffic boost: Relevant push messages have the potential to increase website traffic significantly.
  • Budget-friendly: In contrast to ads, SMS, or messenger services, there are no costs per send or click.
  • High conversion rate: Push messages achieve click and response rates on par with the best email newsletters.
  • GDPR-compliant: Since no personal data is collected, marketers do not have to worry about the General Data Protection Regulation.
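The one-click opt-in described above maps directly onto the standard Web Push API. The following is a minimal sketch, not a complete implementation: it assumes a service worker is already registered, and `vapidPublicKey` is a placeholder for your own application server key. `Notification` and `navigator.serviceWorker` are browser-only APIs and will not run outside a browser.

```javascript
// Minimal sketch of a web push opt-in using the standard Web Push API.
// Assumes a service worker is already registered for this site.
// "vapidPublicKey" is a placeholder for your application server key.
async function subscribeToPush(vapidPublicKey) {
  // One click: the browser shows its native permission dialog.
  const permission = await Notification.requestPermission();
  if (permission !== "granted") return null; // user declined or dismissed

  // Subscribe via the service worker's push manager.
  const registration = await navigator.serviceWorker.ready;
  return registration.pushManager.subscribe({
    userVisibleOnly: true,              // required by current browsers
    applicationServerKey: vapidPublicKey,
  });
}
```

Note that no email address or other personal data is entered anywhere in this flow, which is the basis for the GDPR point above.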

Best practices for setting up web push campaigns

Web push notifications can be created and sent quickly. For maximum success, however (i.e., the largest possible recipient base and consent rate), they require a well-thought-out plan. A consistent, appropriate sending frequency is just as important as the design, relevance, and attractiveness of the individual notifications.

Planning should begin by considering which goals are being pursued with push notifications and how push marketing should be integrated into the overall communication and content strategy. This includes an audit of existing and planned content from all other marketing channels, such as newsletters and blogs. Do you already have a content production and marketing action plan? Then it should serve as the basis for promoting the content and campaigns it contains with push notifications.

Whether B2B or B2C, numerous occasions and hooks for notifications can be found in every industry. Here are some ideas for taking advantage of the full range of options:

  • Notices of new content such as blog articles, white papers, videos, etc.
  • Invitations to participate in events such as online seminars or store openings
  • Product and assortment news: for software and cloud services, information about new releases; in eCommerce, notifications about new, newly available, or currently trending articles and top sellers
  • Promotion of special offers and discounts

Some occasions for notifications are obvious because they are directly related to the sender’s core business: the upcoming Mother’s Day for florists, or the impending onset of winter for tire dealers. With a little imagination and ingenuity, however, many other occasions can serve as hooks: from birthdays to the clock change to a lunar eclipse. The important thing is that the occasion and information always match the product or offer and its users.

The keys to successful push notifications

Subscribers expect useful, relevant notifications such as educational tips, exciting news, or attractive deals. At the same time, recipients want to be addressed emotionally. Notifications that arouse curiosity, are easy to grasp, refreshing to read, or genuinely helpful score particularly well. Humor also often goes down well.

Here are more tips:

  • Timing: The opt-in invitation should not appear immediately upon entry, but only after a short delay of a few seconds. An additional prompt via exit intent, shortly before the visitor leaves the website, can be a successful strategy to keep in touch with users. Last but not least, the opt-in can also be linked to an order, a registration, or the like. In general, it makes sense to display the opt-in dialog on as many pages as possible and to omit it only on individual pages, such as the career section of your website.
  • Time: When addressing private individuals, Saturday and Sunday around noon have proven to be particularly successful sending times. In the business environment, however, Tuesday to Thursday, early in the morning or in the evening, is more promising. It is advisable to test different days and times to find the ideal sending times for your push messages.
  • Dialogue type: By default, the opt-in invitation is placed as an overlay on the website itself, so the user can only continue browsing after interacting with the element. Other options are message bars that appear above or below the website and therefore do not cover the actual page, and sliders that slide in from one side. Opt-in triggers can also be integrated into the pages themselves as an element.
  • Visual design: Ideally, dialogues should be adapted to the look and feel of the website or your corporate design.
  • Textual design: There are two basic copywriting strategies, which also affect the graphic options: either a more factual text closely based on standard browser dialogs, or a deliberately more creative approach that comes across as clearly promotional. In general, short and concise sentences with a clear call to action are preferable. It is also essential to highlight the advantages for users.
  • Incentivizing opt-ins: It is common practice for newsletters in e-commerce: shop visitors are offered a discount voucher when they sign up for the newsletter. Such an additional incentive is also possible for web push opt-ins.
  • Freedom of choice: It is also helpful to let subscribers decide for which topics or occasions they would like to receive information, rather than deriving this solely from their tracked behavior. This is particularly important when the range of offers is very complex.
  • Personalization: With permission marketing, companies address people who are interested in their products or services, so it can be assumed that the messages are highly relevant. Personalized content that fits the current position in the customer journey or the person’s interests and preferences is particularly attractive. Personalization is especially good at increasing usefulness, for example by sending offers and tips that match the items a person is currently viewing or has bought.
  • Variety: Experimentation is required both in the design of opt-in invitations and in the messages themselves. Range is also needed in occasions and content: it is not just about bargains, but also about entertainment, a look behind the scenes, and much more.
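The timing rules above (wait a few seconds after page entry, and do not re-prompt soon after a dismissal) can be sketched as a small decision function. The threshold values here are illustrative assumptions for the example, not values prescribed by any push provider.

```javascript
// Sketch: decide whether to show the opt-in dialog. All timestamps in ms.
// The thresholds are assumed example values, not recommendations.
const PROMPT_DELAY_MS = 5000;                         // a few seconds after entry
const DISMISS_COOLDOWN_MS = 7 * 24 * 60 * 60 * 1000;  // a week after a dismissal

function shouldShowOptIn(now, pageEnteredAt, lastDismissedAt) {
  if (now - pageEnteredAt < PROMPT_DELAY_MS) {
    return false; // too early: the visitor has only just arrived
  }
  if (lastDismissedAt !== null && now - lastDismissedAt < DISMISS_COOLDOWN_MS) {
    return false; // the visitor dismissed the prompt recently
  }
  return true;
}
```

In a real page, `now` would come from `Date.now()` and the dismissal timestamp would be persisted, for example in `localStorage`.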

Conclusion

Push notification marketing is a channel that creates a direct path to users. Used correctly, this form of permission marketing promises rapid integration, high conversion rates, and increased customer loyalty. Because interests differ, it is essential to segment subscribers and customers into different groups.

Those who know and understand their target group, and send well-crafted notifications, combine content that is valuable to that group with their marketing messages: a win-win situation for companies and customers alike.

Mr. Zuckerberg, take responsibility! Or why Facebook can never be an independent platform that provides users with the ultimate truth

Washington and Brussels are calling for far-reaching regulation of Facebook: above all, it is the opinion-shaping power of the world’s largest social network that makes politicians suspicious. But with increasing political intervention, the risk grows that Facebook will become an organ controlled by the state and special interests.

Don’t let the politicians lie – fact-checking is a must

It was only a matter of time before social networks themselves became a subject of the US election campaign, because at some point a politician would demonstratively exceed the limits of what platforms such as Twitter or Facebook still permit under their own guidelines. That has now happened: Twitter recently attached a fact-checking label to a tweet by the President of the United States about the susceptibility of postal voting to fraud, offering users further information on the subject, some of it contradicting Trump’s post. Facebook even deleted a number of his posts that dealt with the problems of the left-wing militant Antifa network and used symbols that had also been used by the National Socialists.

President Trump sees the measures as interference with freedom of speech and, with the recently issued executive order on preventing online censorship, wants to stop such interventions in the content published on the platforms in the future. Meanwhile, leading Facebook employees, parts of the advertising industry, many politicians, and parts of the user base of Facebook and Twitter are demanding exactly the opposite: intervene more in the content, don’t let the politicians lie, fact-checking is a must, they argue.

The current regulation of social networks in the United States

In general, a dispute has arisen in the USA about how social networks should deal with content. It is hardly surprising that in this polarized country there is no consensus about what is merely sharp rhetoric and what is perhaps a call for violence, and that in the heated climate a post is quickly labeled a false message or disinformation even though its content is merely controversial. The Americans elect their President in five months, and the political establishment trembles at the influence that Facebook, Twitter, and YouTube could have on the outcome. Whether politicians, lobbyists, civic associations, trade unions, advertisers, or employees: in the end, they all want a say in what can be posted on Facebook, Twitter, or YouTube. Private opinion, it is argued, now needs state-imposed limits; social media regulation is required. Such efforts are also underway in Brussels.

But instead of regulation that offers politicians a gateway, what is needed are framework conditions that strengthen responsibility and market mechanisms. Do we want political influence on opinion platforms as essential as Facebook or YouTube to increase? Which opinions will still be considered acceptable on Facebook in the future, when discussions about topics like racism, environmental protection, or American politics are so often threatened by an emotional, moralizing cudgel of opinion?

What would be needed, rather than the influence of moralizing opinion leaders on Facebook, is competition-promoting regulation that would, first of all, transfer to Mark Zuckerberg what every entrepreneur has to carry: responsibility. Secondly, the business model would have to be adjusted so that the user finally becomes a customer (today he is the product). And thirdly, any regulation should assume, and strengthen, a responsible and courageous media consumer.

The current regulation of social networks in the United States dates back to the childhood days of the Internet. Long before Mark Zuckerberg founded Facebook, the American legislature had already created the ideal legal conditions for its success. Back in 1996, when Zuckerberg was just eleven years old, the Communications Decency Act (CDA) was passed in the USA. It was the era of the first Internet service providers (ISPs), and the main aim was to prevent pornographic and other inappropriate content from poisoning the climate on the Internet. Section 230 decoupled the right to intervene from the obligation to take ultimate responsibility for the remaining content:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

According to popular belief, these 26 words created the Internet. They became the legal business foundation of Facebook, founded in 2004. To this day, the social network presents itself as a neutral platform on which users post their content; by invoking Section 230, the company assumes only minimal responsibility for it. And there seem to be no limits to Facebook’s growth: today, several hundred million posts are published on Facebook every day, and 2.8 billion people access the social network every month.

Facebook, like a publisher, has to take responsibility for what appears on its pages

In practice, however, Facebook only has to assume responsibility for what happens on the platform in exceptional cases. And whoever does not want even that is best served by retreating to the position of neutrality. But Facebook’s impartiality or independence is an illusion. Of course the company intervenes massively in the news feed, the stream of content that reaches users on their own Facebook pages. Facebook’s algorithms control what the user gets to see, when, and how often. Their goal is to keep the user on the page as long as possible, because the longer he stays, the more advertising Facebook can show him. And the more Facebook can learn about the user (via his posts, the likes he distributes, the web pages he visits long after he has left Facebook, and so on), the more expensively it can sell advertising space to advertisers. Ultimately, the user is the product that Facebook sells to the advertising industry.

The group increasingly counters the criticism that it is a platform for false information and disinformation campaigns with self-regulation: general rules of conduct, fact-checkers, and a newly established oversight board are intended to manage the tightrope walk between the freedom of opinion that results from the CDA and the limits of what is permissible. Facebook now pays a whole army of external, supposedly objective fact-checkers, who check controversial posts for truthfulness. Depending on the result, Facebook removes such posts, or its algorithms push them so far down the news feed that they become virtually invisible. And the twenty-member oversight board, staffed with external experts, is to decide in cases of dispute or doubt whether content was rightly removed or annotated by Facebook.

Self-regulation

But this type of self-regulation is hugely problematic. First, Facebook comes very close to being a publisher. Second, it is based on the false assumption that there is only one truth and that facts can be separated from opinion. An allegedly objective assessment of a controversial claim does not necessarily refute it.

Against this background, and given politicians’ increasing attempts to exert institutional influence on Facebook and its content, it is time to create clear responsibilities: Facebook, like a publisher, has to take responsibility for what appears on its pages. This responsibility would mean a duty for Facebook to remove illegal content. It would also give the company the freedom to edit, classify (as fact or opinion), curate, and select content according to its own style. That would massively change the face of today’s Facebook. Facebook could no longer hide behind the illusion of neutrality, and users could no longer be deceived by it. Facebook would presumably become less attractive as an advertising platform, and users might be asked to pay. But with that, they would finally cease to be the product and become customers, and Facebook would become their product, not the advertisers’.

At the same time, competition in the social network market should finally be stimulated: it should be possible to switch from Facebook to another social network and take along all contacts and connections, the so-called social graph. Just as in the mobile phone market it was the right to take one’s phone number to another provider that got competition going, the portability of the social graph should be guaranteed. The design of the various social networks would then be guided more by user demand, and not by politics and interest groups, as is to be feared in the future.

But the social media user cannot avoid one thing: he must act as a responsible and courageous media consumer. It is not brave to arbitrarily topple historical statues from their pedestals in virtual spaces, to ban opinions that deviate from the mainstream, or to elevate demands for the protection of minorities or the environment to a quasi-religion. Nor is it brave to indulge in the illusion that there could be a Facebook that, as an independent platform, provides users with the ultimate truth. Responsible citizens dare to engage with the ideas of dissenters on social networks, and they can do so rationally and critically.

How to write comments that bring visitors (but don’t make you a spammer)

I had to laugh.

A few days ago, I saw a funny photo, and I found the statement particularly apt.

In the photo, we see two older women chatting on the street:

These days I only dare to go out on the street. It has gotten too bad on the Internet.

Sometimes it really is like that. People scold, swear, troll, and rant so much on the Internet that the street already feels safer than the Facebook timeline.

On the Internet, many people seem to be losing their manners.

You can see this on blogs too: comments that are pure spam, that offend or that are subliminally arrogant.

Comments are an excellent tool for getting new visitors – if you do it right, because almost every blog offers you the opportunity to leave your URL when you comment.

If readers find you likable, they click on your link, and you have won a new reader.

Another feature of comments is almost as important: they create a relationship between you and the blogger. Usually, the blog comment is your first encounter with another blogger. So you should make a good impression.

But how do you write good comments?

1. Have a face

Have you ever had a date when you showed up in a Spiderman costume? Or as a Duckwin Duck with a cape and mask?

Of course not.

People wearing a mask are immediately suspicious. They are up to something or are hiding something under their cape. You don’t trust such a person.

It’s no different on the Internet.

If you comment, there should be a face – your face.

Do not use childhood heroes or other “masks” as a gravatar. And you shouldn’t just use the gray silhouette that is set by default either.

Use a photo in which you look good and are easily recognizable as your gravatar.

Everything else just makes people suspicious – and suspicious people don’t click your link.

2. Have a name

On the Internet, many people like to hide behind a pseudonym.

If it were legally possible, many would even like to blog with a pseudonym – but this is not possible due to the obligation to provide an imprint.

I am not a dating expert, but if you introduce yourself as “Master Yedi” on your date, I am sure the meeting will go wrong – unless you are lucky and your counterpart doesn’t write you off as delusional.

Therefore, use your real name when commenting.

And no, please do not use your domain as a name: “Harry from CoachingDeluxe.de.”

That looks spammy. It looks like you want to put your domain in the comment as often as possible.

So don’t do that.

Just write: “Harry”.

There is a separate field for the domain.

3. Don’t scatter links

Links in comments are like fire: they can keep you warm and bring joy, but you can also burn your fingers on them.

So you should treat links to your blog very carefully.

My advice: leave it.

Sure, you may be able to contribute something to the discussion, but no matter how good the link is, it always leaves the same impression on the blogger:

He just wants to spread his links.

Maybe some readers get this impression too. It is tough to post a link in a comment without coming across as a spammer. And spammers immediately leave an unpleasant aftertaste.

Above all, you should refrain from comments such as: “Cool, I also wrote something on this topic [Link].”

Honestly: who cares? You are just joining the “me too” shouting at a flea market. You can be happy if the blogger approves your comment at all.

If you want to arouse interest and trust, stop scattering links in the comment field entirely.

There are better ways to get backlinks.

4. Don’t stink

Self-praise stinks, even on the Internet.

Some people love their own voice so much that they can tell you for 15 minutes what they had for breakfast and how tenderly the butter melted on their tongue.

Unfortunately, you also see self-praise in blog comments. And to be clear: self-praise has no place there.

So you read comments like:

  • “I’ve been implementing all these tips for years. That’s why I have the leading blog in my niche. Have a look; you can learn something there.”
  • “Nothing new in this post, I’ve been doing all of this for years.”
  • “Oh yes: I can make a good living from my blog income and only paid the deposit for my Mercedes SLK yesterday.”

My thought with such comments: nice for you. You get a medal. Pat yourself on the back three times.

Seriously: hold back on self-praise in your comments. Otherwise, you immediately come across as unlikable.

Oh yes: mockery, destructive criticism, and sarcasm stink too.

5. Read the article

Yes, this should go without saying.

This phenomenon is often seen on Facebook: people have only read the headline of an article but are already diligently writing hate comments or otherwise adding their mustard.

If you don’t want to come across as a mustard slinger, show with your comment that you’ve read the article.

So don’t write: “Great article.”

But: “I loved the example with the mustard. That made the problem so clear to me.”

Do you see the difference?

Be as specific as possible and refer to the article – the more specific, the better, because then you can also start a dialogue.

6. Show appreciation

People love recognition.

And when you give others credit, it has a significant effect: people love you too.

Imagine your circle of friends, and a newcomer joins the round. He criticizes each of your friends, rattles off know-it-all facts, and tries hard to present himself in the best light.

Then a second newcomer joins, and he gives praise. He sincerely compliments your friends’ shoes and their smiles and is genuinely interested in them.

Which person do you find more likable?

The second, of course. Those who give recognition also get recognition from others.

Therefore, you should always show appreciation in your comments. No, you shouldn’t grovel or suck up, but give honest praise.

As a reminder: when you flatter, you tell people what they want to hear. When you praise, you tell them what they don’t expect.

So your comment should contain a simple element: the compliment.

I’m not saying that because I want you to praise me in the comments, but because that’s how it should be. Compliments are the best way to make yourself likable.

7. Increase the value

Now comes the crowning touch.

The most important aspect of a good comment is that it offers added value. Ideally, your comment increases the value of the article instead of decreasing it (as spam comments unfortunately do).

You can deliver added value in the following ways:

  • Tell about personal experiences – Often, a blog article reflects only one person’s experience. If you bring your own experience into the discussion, readers learn that it works for others too, or that it can be done differently. A good experience report leads readers to this reaction: “Oh, he had the same problem as me. I’ll take a look at his blog.”
  • Ask meaningful questions – In your comments, ask questions that other readers might also have; then they will see the answer directly. Your questions should be as specific and goal-oriented as possible. A good question is always: “How would you approach problem X? I can’t get any further …”
  • Add a point – Many bloggers like to write list posts with a fixed number of points. If you can think of another one, it is perfect material for a comment. But please don’t be spiteful or lecture with a raised index finger; just put your tip on the table. If you are lucky, the blogger will even add your point to his article.

Improve the world

There are enough trolls and spammers on the Internet to make the web a place you don’t like to visit.

We can change that. We can start with ourselves and write comments that are not spammy. Comments that add value. Comments that give honest recognition and do not burst with self-praise.

In the end, everyone wins: the blogger gets more comments, good comments get you more attention, and the two of you enter into a dialogue.

What more do you want?