Is it possible for social media to regulate harmful content?

12th April 2019
Alex Lynn

Internet companies could be fined if they fail to tackle ‘online harms’ such as hosting terrorist propaganda or images of child abuse. The ‘Online Harms White Paper’, which also proposes an independent watchdog and a written ‘code of practice’, is a joint proposal from the DCMS and the Home Office.

Here, Tim Ensor, Director of Artificial Intelligence at Cambridge Consultants, offers a note of caution: “Today’s white paper proposing regulation of social media platforms aims to put the burden of responsibility for hosted content onto tech companies, with the goal of reducing harmful online content. Policy makers are rightly aiming to balance considerations of freedom of speech and social harm.

“However, consideration is also needed for how these responsibilities, if imposed, could be met. Facebook alone generates four petabytes of new data per day, equivalent to hundreds of thousands of minutes of video content. Moderating this content with human staff alone is impossible. The only feasible way to assess this content will be to harness artificial intelligence.

“Social media platforms are already using AI for this purpose, but in adding regulation, policy makers will need to be mindful that access to the most effective AI is not universal, and only the giants have the very best.

“Smaller internet companies will not have the same access to skilled engineers, computing power and underlying data to train AI systems that the global technology giants enjoy. Therefore, alongside any new obligations on the content itself, the government must consider how these obligations can be applied fairly across the sector.”
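
Ensor’s point that AI is the only feasible way to assess content at this scale typically translates, in practice, into a tiered pipeline: a classifier scores each item, high-confidence cases are actioned automatically, and uncertain cases are escalated to human reviewers. The Python sketch below illustrates that pattern; the scoring function, thresholds and flagged terms are hypothetical stand-ins, not any platform’s actual system.

```python
# Minimal sketch of tiered, AI-assisted content moderation.
# score_item() is a hypothetical stub standing in for a trained
# harmful-content classifier; real platforms combine text, image
# and video models. The thresholds here are illustrative only.

from dataclasses import dataclass


@dataclass
class ModerationDecision:
    item_id: str
    score: float   # estimated probability the item is harmful (0.0-1.0)
    action: str    # "remove", "human_review" or "allow"


def score_item(content: str) -> float:
    """Hypothetical stand-in for a trained classifier's output."""
    flagged_terms = {"propaganda", "abuse"}  # illustrative only
    hits = sum(term in content.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)


def moderate(item_id: str, content: str,
             remove_threshold: float = 0.9,
             review_threshold: float = 0.5) -> ModerationDecision:
    """Act automatically on confident scores; escalate uncertain ones."""
    score = score_item(content)
    if score >= remove_threshold:
        action = "remove"         # high confidence: remove automatically
    elif score >= review_threshold:
        action = "human_review"   # uncertain: queue for a human moderator
    else:
        action = "allow"
    return ModerationDecision(item_id, score, action)


if __name__ == "__main__":
    print(moderate("post-123", "Example post containing propaganda."))
```

The two-threshold design reflects the trade-off Ensor describes: automation absorbs the sheer volume, while human judgment is reserved for the genuinely ambiguous cases. Building even this much well requires the skilled engineers, computing power and training data that, as he notes, smaller companies may lack.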
