Big tech cannot crack down on online hate alone. We need to fund the smaller players

This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.

Author: Melina Sánchez Montañés, Managing Director, Innovation Fund at Alfred Landecker Foundation


  • Online hate can be difficult to identify, given its new and sophisticated forms.
  • Tech companies should move beyond detection and moderation of offensive and defamatory content to protect impacted communities.
  • More catalytic funding is needed to accelerate early-stage technologies that promise to combat disinformation, hate and extremism in novel ways.

Are you one of the billion active TikTok users? Or are you rather the Twitter type? Either way, chances are you have come across hateful content online.

Hate speech starts offline – and can be accelerated by threats to society. COVID-19 is one such example: the pandemic has fuelled a global wave of social stigma and discrimination against the “other”.

Not surprisingly, anti-Semitism, and racism more broadly, is on the rise. A study conducted by the University of Oxford found that around 20% of British adults endorse statements like “Jews have created the virus to collapse the economy for financial gain” or “Muslims are spreading the virus as an attack on Western values.”

The internet is where these beliefs can become mainstream. As the new epicentre of our public and private lives, the digital world has facilitated borderless and anonymous interactions thought impossible a generation ago.

Unlike the physical world, however, the internet has also provided a medium for the exponential dissemination and amplification of false information and hate. And tech companies know it. In 2018, Facebook admitted that its platform was used to inflame ethnic and religious tensions against the Rohingya in Myanmar.

As the lines between online and offline continue to blur, we have a tremendous responsibility: to ensure a safe digital space for all. The opportunity lies in deploying catalytic funding to innovative technologies that combat disinformation, hate and extremism in novel ways.

The butterfly effect of social media

Even if only 1% of tweets contained offensive or hateful speech, that would be equivalent to 5 million messages daily. It is not difficult to imagine the consequences of such virality – the Capitol siege on 6 January painfully exemplifies how quickly social media can incite violence.
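The 5-million figure is a simple back-of-the-envelope calculation; it assumes the commonly cited estimate of roughly 500 million tweets posted per day:

```python
# Back-of-the-envelope check of the article's figure, assuming
# ~500 million tweets per day (a commonly cited estimate).
daily_tweets = 500_000_000
hateful_share = 0.01  # the article's hypothetical 1%
print(int(daily_tweets * hateful_share))  # prints 5000000
```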

Unlike violent extremism, hate speech is often subtle, hidden among the terabytes of content uploaded to the internet every day. Codewords like “juice” (to refer to Jewish people) or covert symbols (like the ones in this database) feature frequently online.

They are also well-documented by advocacy organisations and academic institutions. The Decoding Antisemitism project, funded by the Alfred Landecker Foundation, leverages an interdisciplinary approach – from linguistics to machine learning – to identify both explicit and implicit online hatred by classifying secret codes and stereotypes.
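To make the classification idea concrete, here is a purely illustrative sketch of the simplest possible approach: flagging known coded terms against a lexicon. The lexicon and function name here are invented for illustration; real systems such as Decoding Antisemitism rely on far richer, context-aware linguistic and machine-learning models, precisely because a naive lexicon flags benign uses too.

```python
# Purely illustrative: a toy lexicon-based flagger for coded terms.
# Real detectors must weigh surrounding context, since most codewords
# are ordinary words in the vast majority of their uses.

CODEWORD_LEXICON = {
    "juice": "possible coded antisemitic slur (highly context-dependent)",
}

def flag_codewords(text):
    """Return (token, note) pairs for tokens found in the toy lexicon."""
    tokens = text.lower().split()
    return [(t, CODEWORD_LEXICON[t]) for t in tokens if t in CODEWORD_LEXICON]

# A naive lexicon flags benign uses too -- exactly why context matters:
print(flag_codewords("freshly squeezed juice"))
```

The false positive on an innocuous sentence is the point: scanning for hidden hate at scale is a context problem, not a keyword problem.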

However, the bottleneck is not how to single out defamatory content, but how to scan platforms accurately and at scale. Instagram offers users the option to filter out offensive comments. Twitter acquired Fabula AI to improve the health of online conversations. And TikTok and Facebook have gone so far as to set up Safety Advisory Councils or Oversight Boards that can decide what content should be taken down.

With these efforts alone, however, tech companies have failed to spot and moderate false, offensive or hateful content that is highly dependent on context, culture and language.

The dark holes of disinformation

Online hatred is ever-evolving in its shape and form to the extent that it becomes increasingly difficult to uncover. Facebook is only able to detect two-thirds of altered videos (also known as “deepfakes”). Artificial Intelligence (AI) and Natural Language Processing (NLP) algorithms haven’t been quick enough in cracking down on trolls and bots that spread disinformation.

The question is: did technology fail us, or did people fail at using technology?

Identifying altered videos through machine learning. Image: Defudger.com

The 4Ps of combating online hate

Unless tech companies want to play catch-up on a constant basis, they should move beyond detection and content moderation to a holistic and proactive approach to how hatred is generated and disseminated online (see chart below).

Such an approach would have the following four target outcomes:

Promoting diversity and anti-bias: Technologies can be designed and developed in an inclusive manner by engaging those who are affected by online hate and discrimination. For example, the Online Hate Index by the Anti-Defamation League uses a human-centered approach that involves impacted communities in the classification of hate speech.

Preventing discrimination and violence: Certain tech designs, like recommendation engines, can accelerate pathways to online radicalisation. Others promote counter-speech or limit the virality of disinformation. We need more of the latter.

The re-direct method employed by social enterprise Moonshot CVE channels internet users who search for violent content towards alternative narratives.

Protecting vulnerable groups: Tech platforms have primarily focused on semi-automated content moderation, through a combination of user reporting and AI flagging. However, new approaches have emerged. Samurai Labs’ reasoning machine can engage in conversations and stop them from evolving into online hate and cyberbullying.

Prompting civic engagement: By ensuring that every voice is heard and that citizens – especially younger ones – are empowered to engage in civic discourse, we can help build more resilient societies that don’t revert to harmful scapegoating. In the US, New/Mode makes it easier for citizens to effect policy change by leveraging digital advocacy tools.
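The semi-automated moderation described under “Protecting vulnerable groups” – combining user reports with AI flagging – can be sketched as a simple triage rule. The thresholds below are invented for illustration; real platforms tune them per policy, language and risk level:

```python
# Hypothetical sketch of semi-automated moderation triage: combine
# user reports with an AI confidence score. Thresholds are invented
# for illustration, not drawn from any platform's actual policy.

def triage(user_reports, ai_score):
    """Route content: auto-remove, send to human review, or keep."""
    if ai_score >= 0.95 and user_reports >= 3:
        return "remove"          # both signals strong: act automatically
    if ai_score >= 0.5 or user_reports >= 1:
        return "human_review"    # any single signal: escalate to a moderator
    return "keep"                # no signal: leave the content up

print(triage(user_reports=0, ai_score=0.1))  # prints keep
```

The design choice worth noting is that automation only acts alone when both signals agree; ambiguous cases go to a human, which is where context, culture and language judgement still matter most.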

How is hatred generated and disseminated? Image: The author

Funding early-stage innovation

The common denominator of the 4Ps is that, with the help of technology, they address the root of the problem.

Indeed, it is no longer tech for the sake of tech. From Snapchat’s Evan Spiegel to SAP’s Christian Klein, dozens of CEOs signed President Macron’s Tech for Good call a few months ago.

Beyond pledges, companies are embracing technology’s potential to be a force for good by setting up mission-driven incubators, like Google’s Jigsaw, or by allocating funds to incentivise fundamental research, like WhatsApp’s Award for Social Science and Misinformation.

In conversations with various research organisations, such as the Institute of Strategic Dialogue (ISD), I have learnt firsthand about the demand for tech tools in the online hate and extremism space. Whether it is measuring hate in real time across social media or identifying (with high levels of confidence) troll accounts or deepfakes, there is room for innovation.

But the truth is that tech is being used to incite and promote online hatred at a pace, and in ways, that outstrip what companies can preempt or police. If the big platforms can’t solve the conundrum, we need smaller tech companies that will. And that is why catalytic funding for risky innovation is key.

Step aside, big tech

With hatred on the rise during COVID-19 and demand for new solutions growing, it is only natural that funders and investors become interested in scaling for-profit and non-profit early-stage tech tools. Established venture capital players like Seedcamp (investor in Factmata) and companies like Google’s Jigsaw are starting to bridge the gap between supply and demand to combat disinformation, hate and extremism online.

But we need more. And so I invite you to join me.

