TBS Newsbot

Facebook to fight violent live streams with AI, but is passing the buck a solution?

The number of ultra-violent streams on Facebook has grown because of the justifications we make. But if we let an AI make the decision instead, is that a solution?

Under the plastic nonsense of Facebook sits a rather grim undercurrent. We see vague traces of it when the platform broadcasts the worst of humanity. We've all seen it: the recorded murder of partners, of the self, of others. It is murder most foul, spread across the most palatable of platforms.

While Facebook has used human moderators in the past, it is relying more and more on algorithms that flag these posts, and it's looking to push that further. Facebook already has plans to develop its own AI hardware: a dedicated chip to host the AI. The biggest advantage of such a chip? It requires much less computing power, which would make the algorithms much faster, according to Bloomberg.

As it stands, the filters can catch violent videos in about ten minutes in a best-case scenario, although sometimes the videos stay on the site for hours. Ideally, Facebook would like to remove these streams the moment they occur, and while the chip might move it closer to that goal, that still seems a ways off.

The system plays off an old media rule: hold back the details of serial killers to curtail the audience they might gain and the social traction they might reap. For further reading, check out this NPR transcript, or just watch Network.

The problem with Facebook is how it has handled these instances in the past, caught between responsibility and free speech. It offers a free platform, but the implementation of that freedom is where it struggles. For instance, when police officer Jeronimo Yanez shot and killed Philando Castile in 2016, Castile's girlfriend filmed the event on Facebook Live. Facebook dithered on the footage, first deleting it, then re-uploading it with graphic content warnings before ultimately removing it again.

Facebook's logic was that videos raising awareness of violence would be allowed, however obvious the horror. Perhaps, then, the way forward is to let the AI remove all violent videos on the grounds of violence alone, without the subtleties we use to justify keeping them.

…unless the AI learns the same subtleties we know.
