By Hany Farid – In 2016, Facebook, Google, Microsoft, and Twitter announced that they would work together to develop new technology to quickly identify and remove extremism-related content from their platforms. Despite some progress, serious problems remain.
First, we need a fast and effective take-down process. Once content has been identified, reported, and determined to be illegal or in violation of terms of service, it should be removed immediately (Prime Minister Theresa May has called for a maximum of two hours from notification to take-down).
Fourth, we need to invest in human resources. While advances in machine learning hold promise, these technologies – as the technology companies themselves admit – are not yet accurate enough to operate across the breadth and depth of the internet. There are more than a billion uploads to Facebook each day, and 300 hours of video are uploaded to YouTube every minute.
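To see why accuracy at this scale matters, consider a rough back-of-the-envelope calculation. The upload figure comes from the text above; the 99% accuracy is an illustrative assumption, not a published number for any real system:

```python
# Back-of-the-envelope: error volume of an automated classifier at platform scale.
# daily_uploads comes from the article; assumed_accuracy is a hypothetical figure.

daily_uploads = 1_000_000_000    # ~1 billion uploads to Facebook per day
assumed_accuracy = 0.99          # illustrative assumption, not a real benchmark

misclassified_per_day = int(daily_uploads * (1 - assumed_accuracy))
print(f"Misclassified items per day at 99% accuracy: {misclassified_per_day:,}")
# → Misclassified items per day at 99% accuracy: 10,000,000
```

Even a hypothetical 99%-accurate classifier would mishandle on the order of ten million items every day on a single platform, which is the arithmetic behind the call for human analysts below.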
This means that any machine-learning-based solution will have to be paired with a significant team of human analysts who can resolve the complex and often subtle issues of intent and meaning that remain out of reach of even the most sophisticated machine-learning systems. more> https://goo.gl/X2ACdL