We Rise by Lifting Others
Tiffany Miller · 02.11.2020

Improving online content moderation has become increasingly important as social media platforms have grown into a primary source of information for so many people around the world. To get a sense of the scale of new content being created, consider YouTube. The video-sharing site has 1.8 billion monthly active users, who collectively watch over 1 billion hours of content every single day. And in each 24-hour period, another 576,000 hours of new content is uploaded to the site.

Companies such as Google (owner of YouTube), Facebook, and Twitter encourage users to upload what is known as user-generated content (UGC), but in some cases users will upload content that should not be hosted on the site, for reasons such as:

  • Copyright: Music, movies, or any other material that is protected by copyright and therefore should not be distributed.
  • Offensive material: Most networks, including Facebook, Instagram, and YouTube, have a strict policy against offensive material, especially pornography, and will remove it immediately. Twitter is less restrictive, but will still act on complaints of offensive material.
  • Illegal material: Information on terrorism or activities such as human trafficking.

Imagine if a team of content moderators had to watch every new video uploaded to YouTube alone. With 576,000 hours of new content arriving each day and each moderator working an eight-hour shift, you would need a team of 72,000 people watching new content around the clock, and that figure does not even account for breaks or any other time at work spent not watching videos. Now add in all the other major social networks and you can see the scale of the content moderation problem.
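The arithmetic behind that headcount is straightforward; here is the back-of-the-envelope calculation, using only the upload volume quoted above:

```python
# Back-of-the-envelope estimate of the moderator headcount implied above.
HOURS_UPLOADED_PER_DAY = 576_000   # new YouTube content per day (figure quoted in this article)
SHIFT_LENGTH_HOURS = 8             # one moderator watching video for a full shift

moderators_needed = HOURS_UPLOADED_PER_DAY / SHIFT_LENGTH_HOURS
print(f"Moderators needed: {moderators_needed:,.0f}")  # -> 72,000
```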

Most networks operate a reporting system so users can flag dubious content. This means that something illegal or offensive may go live on the site, but only for a short time: once a complaint is received, the content is taken offline and queued for moderation.
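As a rough illustration of how such a report-driven pipeline might be wired up (the class and method names here are hypothetical, not any platform's actual API), a single complaint is enough to take the content offline and place it in a queue for a human moderator:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Content:
    content_id: str
    live: bool = True          # published and visible to users
    reports: int = 0           # number of user complaints received

class ReportQueue:
    """Minimal sketch of a user-report driven moderation queue (hypothetical names)."""

    def __init__(self):
        self.pending = deque()  # content waiting for a human moderator

    def report(self, content: Content):
        content.reports += 1
        if content.live:
            content.live = False          # take it offline as soon as a complaint arrives
            self.pending.append(content)  # schedule it for human moderation

    def next_for_review(self):
        return self.pending.popleft() if self.pending else None
```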

This could be a manual process, with reported content simply piling up in a queue for a moderator to check, or an artificial intelligence (AI) system can be deployed. Where a company uses AI, it will typically scan the video at the time of upload and immediately block or flag it before a human moderator even looks at it.
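A minimal sketch of that upload-time check might look like the following. The `classify` function stands in for whatever model the platform actually runs, and the threshold values are purely illustrative assumptions, not real platform settings:

```python
BLOCK_THRESHOLD = 0.95   # assumed: near-certain violations are blocked outright
REVIEW_THRESHOLD = 0.60  # assumed: borderline scores are flagged for a human

def classify(video_path: str) -> float:
    """Stand-in for a real model; returns a violation probability in [0, 1]."""
    return 0.0  # a real system would run the uploaded video through a trained classifier

def handle_upload(video_path: str) -> str:
    score = classify(video_path)      # the scan happens at upload time
    if score >= BLOCK_THRESHOLD:
        return "blocked"              # never goes live
    if score >= REVIEW_THRESHOLD:
        return "flagged_for_review"   # held for a human moderator to check
    return "published"                # passed the AI test
```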

This is increasingly becoming the only way that platforms supporting UGC can keep their users safe from offensive or illegal content. The combination of AI checks and user complaints ensures that most dubious content never gets published at all, and anything that does slip through the net can be removed quickly, because the moderation team only has to focus on content that has already passed the AI test.

AI is not perfect in this regard. Parody, irony, and legitimate medical content are all hard to classify correctly; computers don't tell jokes. Even so, AI is now good enough to detect and block most offensive content before it is published and generates complaints.
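One common way to cope with that imperfection is to treat the model's score as a priority signal rather than a final verdict, so human reviewers see the riskiest borderline items first. The sketch below is an assumption about how that routing could work, with made-up names and weights, not a description of any specific platform:

```python
import heapq

class ReviewQueue:
    """Priority queue for human review; higher risk is reviewed first (illustrative only)."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so items with equal risk keep insertion order

    def add(self, content_id: str, ai_score: float, user_reports: int):
        # Risk blends the model's score with user complaints; the weights are assumptions.
        risk = ai_score + 0.1 * user_reports
        heapq.heappush(self._heap, (-risk, self._counter, content_id))
        self._counter += 1

    def next_item(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

queue = ReviewQueue()
queue.add("parody-clip", ai_score=0.72, user_reports=0)    # AI unsure, no complaints yet
queue.add("reposted-movie", ai_score=0.55, user_reports=9)  # many copyright complaints
print(queue.next_item())  # -> "reposted-movie" is reviewed first
```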

For more information on improving content moderation and how AI can be used to make moderation more effective, please leave a comment here or message me directly via my LinkedIn.