April 3rd, 2019
Lauri Penttilä
Valossa Labs Ltd.
Helsinki, Finland
Recent horrible events in New Zealand have raised the question of whether anything can be done to stop deadly live broadcasts before their impact hits broader audiences on a global scale. The suffering the victims and their families have to endure is amplified by the global reach of these streams, and undue notoriety is afforded to the perpetrators. It is clear that these streams must be stopped as close to real time as possible. The faster they can be taken offline, the less they add to the suffering of the victims, and the fewer people are exposed to this traumatizing content. Legislation is already in motion in Europe for regulating dangerous content: Germany is leading the charge against harmful content with swift and decisive penalties for companies that fail to remove hateful content from their services within 24 hours. The EU is also proposing measures against illegal online content through its Digital Single Market initiative. The problem is that 24 hours with this type of extreme content is a lifetime.
The incident in Christchurch has once again brought attention to the responsibilities of online platforms. The media reports that social media and hosting services are doing their best, but that they cannot cope with monitoring every live stream in real time. Letting a record of his heinous act live forever on online platforms, spreading his message of hate, was without a doubt the intention of the attacker.
At Valossa, our recent focus has been on developing AI-powered video recognition solutions for detecting and flagging harmful content for immediate manual review. We started with cinematic material, where the demands of many different cultures and regulations must be met, but tragedies such as these recent events are what motivate us to push forward and find the best ways to use modern AI technology for the world's benefit. We disagree with the argument made by social media platforms that the technology to automatically detect harmful content in a live feed does not exist, because it does.
After analyzing the Christchurch video, we conclude that video artificial intelligence can detect the firearms immediately at the eruption of violence, enabling rapid takedown through the curation process. Letting AI rapidly flag content from a live stream for human moderation gives a human moderator more time to react and take harmful and dangerous streams down.
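The flag-for-review workflow described above can be sketched in a few lines of code. This is only an illustrative outline, not Valossa's actual system: the detector, labels, and confidence threshold here are hypothetical stand-ins for a real video-AI model and its tuning.

```python
# Minimal sketch of a live-stream moderation loop: sample frames, run a
# detector on each, and flag the stream for immediate human review as soon
# as a dangerous object (e.g., a firearm) is detected with high confidence.
# The detector interface and threshold below are illustrative assumptions.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.8          # assumed tunable threshold
FLAG_LABELS = {"firearm", "weapon"}  # labels treated as dangerous

@dataclass
class Detection:
    label: str
    confidence: float

@dataclass
class ModerationQueue:
    flagged: list = field(default_factory=list)

    def flag(self, stream_id, detection, frame_index):
        # In a real deployment this would alert a human moderator instantly.
        self.flagged.append((stream_id, detection.label, frame_index))

def moderate_stream(stream_id, frames, detector, queue):
    """Scan sampled frames in order; flag on the first dangerous detection."""
    for i, frame in enumerate(frames):
        for det in detector(frame):
            if det.label in FLAG_LABELS and det.confidence >= CONFIDENCE_THRESHOLD:
                queue.flag(stream_id, det, i)
                return True  # handed off for rapid human review
    return False

# Demo with a stubbed detector standing in for a real video-AI model:
def stub_detector(frame):
    return [Detection("firearm", 0.95)] if frame == "violent" else []

queue = ModerationQueue()
was_flagged = moderate_stream("stream-1", ["calm", "calm", "violent"],
                              stub_detector, queue)
```

The key design point is that the AI never takes the stream down on its own; it only shortens the time before a human moderator sees the flagged frames and can act.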
We believe that AI can be used for good, and preventing the spread of such hateful violence is a prime example. We foresee that governments will take more regulatory action in the near future, and we hope that the social platforms will follow, with the aid of technology that is already deployed.