Google is under mounting pressure to stamp out extremists’ online presence, and it’s responding to that heat today. The internet giant has outlined four steps it’s taking to flag and remove pro-terrorism content across its services, particularly on YouTube. Technological improvements play a role, of course, but the company is also counting on a human element to catch what its automated filters can’t.
To start, it’s pouring more energy into machine learning research that could improve its ability to automatically flag and remove terrorist videos while keeping innocently posted clips (say, news reports) online. It’s also expanding its counter-radicalization system, which shows anti-extremist ads to would-be terrorist recruits.
It’s the stronger reliance on people that may matter the most, however. Google plans to “greatly increase” the number of humans in its YouTube Trusted Flagger program, improving the odds that it’ll catch terrorist material. It’s likewise working with anti-extremism groups to pinpoint recruiting-oriented content. Google wants to tackle borderline YouTube videos, too: if it spots clips with “inflammatory” religious or supremacist material, it’ll place them behind a warning and cut them off from ad revenue, comments or viewing recommendations. In theory, this strikes a balance between free speech and public safety.
To some extent, the plans are an extension of Google’s ongoing efforts, such as its move to pull ads from extremist videos. Still, they might just assuage politicians who have threatened to institute legal mandates for anti-extremist takedowns. Google, Facebook, Twitter and others have already stepped up their collective fight against terrorism, but this is a relatively concrete roadmap. The big question is whether all these initiatives will be enough. AI-powered flagging and greater human oversight could help, but the sheer volume of video uploaded to YouTube makes it entirely possible that some footage will slip through the cracks.
Source: Google