Moderation & Livestreams
- Maria Allgaier
- Apr 11, 2024
- 3 min read
Moderating livestreams presents several challenges. Some of these include, but are not limited to:
It’s live. This is the most obvious difficulty. Because the content is broadcast in real time, it is much harder to moderate or filter, and moderation systems and moderators must make decisions fast.
Volume. Livestreams tend to draw large audiences, and interactive chats further increase the amount of user-generated content. This means platforms must not only moderate a sudden large volume of content, but also moderate different content types, including the video itself and the live chat.
Contextual challenges. Moderating in real time increases the need, and the pressure, to understand context such as cultural nuances and sarcasm.
Scale. On large online platforms there can be an enormous number of livestreams happening simultaneously.
Trends. As most social media users know, new trends emerge daily. Platforms have to understand and adapt to these trends as they are happening.
Resource constraints. Given the sheer number of livestreams and the associated user-generated content, it can be difficult for platforms to allocate enough moderation resources, and live content makes this even harder.
False positives and negatives. Automated content detection systems may produce both false positives and false negatives, and striking a balance that minimises both is an ongoing effort.
AI classifier quality. Building AI classifiers that are accurate and effective with low false-positive rates across every aspect of moderation is difficult, especially when it comes to CSAM. That’s why at Orthus we have developed a best-in-market CSAM classifier to assist with livestream moderation.
It is evident that there are several challenges associated with livestream moderation. So, how do platforms approach livestream moderation today, and how might this improve or change in the coming years as AI advances?
Moderating livestreams on online platforms involves a combination of automated tools, AI, human moderators, and reporting mechanisms. How livestream moderation is approached varies and depends on the online platform. Here is how some of these mainstream approaches work:
1. Automated Content Detection
This approach uses a combination of image and video recognition as well as audio analysis. These automated systems use algorithms to analyse visual and audio content in real time. A simplified sketch of what such a pipeline might look like is shown below.
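To make the idea more concrete, here is a minimal, hypothetical sketch in Python. It is not Orthus’s actual pipeline: the names classify_frame, get_next_frame, and escalate are placeholders, and the sampling interval and risk threshold are purely illustrative.

```python
# Hypothetical sketch: sample frames from a live stream and score them
# with an image classifier. Frames above a risk threshold are escalated.
import time

FRAME_SAMPLE_INTERVAL = 2.0   # seconds between sampled frames (illustrative)
RISK_THRESHOLD = 0.85         # classifier score above which we escalate

def classify_frame(frame_bytes) -> float:
    """Stand-in for a real image classifier; returns a risk score in [0, 1]."""
    return 0.0  # a real system would run a trained model here

def moderate_stream(get_next_frame, escalate):
    """Poll a live stream, score sampled frames, and escalate risky ones."""
    while True:
        frame = get_next_frame()
        if frame is None:           # stream ended
            break
        score = classify_frame(frame)
        if score >= RISK_THRESHOLD:
            escalate(frame, score)  # e.g. pause the stream or notify a human
        time.sleep(FRAME_SAMPLE_INTERVAL)
```

In practice such systems combine several signals (video frames, audio, chat) rather than a single per-frame score, but the basic loop of sampling, scoring, and escalating is the same.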
2. Reporting
Livestream moderation can rely on users reporting content they consider illegal or harmful. These reports are escalated to a human moderator for review.
3. Word filtration
Livestreams consist not just of live video and audio but often of chats as well. Platforms may use keyword lists to filter and block specific words or phrases that may be inappropriate, as in the sketch below.
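As an illustration only, here is a minimal keyword filter in Python. The blocklist contents, the tokenisation, and the post/hide callbacks are all hypothetical; real chat filters are typically far more sophisticated, handling misspellings, leetspeak, and context.

```python
# Hypothetical sketch of a chat keyword filter. The blocklist, the
# normalisation, and the post/hide callbacks are illustrative only.
import re

BLOCKLIST = {"badword1", "badword2"}  # placeholder terms

def normalise(text: str) -> list[str]:
    """Lowercase the message and split it into word tokens."""
    return re.findall(r"[a-z0-9']+", text.lower())

def is_blocked(message: str) -> bool:
    """Return True if any token in the message is on the blocklist."""
    return any(token in BLOCKLIST for token in normalise(message))

def filter_chat(message: str, post, hide):
    """Hide blocked messages; post everything else to the live chat."""
    if is_blocked(message):
        hide(message)
    else:
        post(message)
```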
4. Warnings
Many platforms have begun using warnings and disclaimers to inform users when content may be sensitive.
5. User blocking, restriction, and muting
Some platforms give creators the option to restrict, mute, block or kick users out of their livestream. This is a way to stop inappropriate or disruptive behaviour.
6. Delays
Certain platforms broadcast livestreams with a short delay in order to give moderators and automated systems more time for content review. A simple illustration of this buffering follows.
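Purely as a sketch, the snippet below shows one way such a delay could work: segments are held in a queue for a fixed window before being published, and anything flagged during that window is dropped. The function names, the 30-second window, and the review callback are assumptions for illustration, not any specific platform’s implementation.

```python
# Hypothetical broadcast-delay sketch: hold each segment for DELAY_SECONDS
# before publishing, giving moderation a window to drop flagged segments.
import time
from collections import deque

DELAY_SECONDS = 30  # illustrative review window

def run_delayed_broadcast(read_segment, review, publish):
    """Buffer incoming stream segments and publish each one only after
    DELAY_SECONDS, skipping segments that review() flags in the meantime."""
    buffer = deque()        # holds (arrival_time, segment) pairs
    stream_open = True
    while stream_open or buffer:
        if stream_open:
            segment = read_segment()
            if segment is None:
                stream_open = False   # stream ended; drain the buffer
            else:
                buffer.append((time.time(), segment))
        # publish everything that has waited out the delay window
        while buffer and time.time() - buffer[0][0] >= DELAY_SECONDS:
            _, pending = buffer.popleft()
            if not review(pending):   # review() returns True if flagged
                publish(pending)
        time.sleep(0.1)               # avoid a busy loop while waiting
```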
7. Human moderators
Human moderators review livestreams that are reported or flagged, as well as livestreams selected through random sweeps.
As one can see, there are many approaches that online platforms can use to tackle livestream moderation, but there is still room for improvement and for making the work easier. One way is to invest in AI classifiers built specifically for livestreams. In fact, T3k/Orthus was recently awarded the EVAC grant to explore how our AI classifiers can be used to help make livestreams safer. Orthus plans to use its range of top-of-the-market classifiers, including its best-in-market CSAM classifier, to help online platforms tackle livestreams with speed and accuracy.