Robots vs. Racists: Is AI censorship really solving the problem?
This long-debated subject has been on our lips for decades, but in the last few years we have seen organisations trying to monitor speech far more closely, and it is happening now. With new AI technologies emerging, companies across the globe are finding creative ways to keep harmful content away from our screens. But is this a good thing?
After three members of the England squad were racially abused online following the nail-biting Euro 2020 penalty shootout against Italy, fans and the general public were left shocked at the scale of the abuse that followed in the coming days. Many conversations arose over the responsibility of the social media companies and whether they could have done more to protect these young men. Ironically, social media platforms rely heavily on algorithms that require no human intervention to determine what does and doesn't appear in an individual's news feed.
Staying with football, a 'world's first' AI system has more recently been implemented in Australia in the hope of tackling the online abuse of football players. Reducing hate speech, be it racism, xenophobia, or homophobia, can make a significant difference to marginalised groups and members of society.
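None of the platforms mentioned here publish the internals of their moderation systems, but most follow the same broad recipe: train a text classifier on labelled examples of abusive and acceptable posts, then hide anything the model scores above a confidence threshold. Here is a minimal sketch of that recipe in Python; the tiny dataset and the 0.7 threshold are invented purely for illustration.

```python
# A toy illustration of ML-based comment moderation: train a text
# classifier on labelled examples, then hide comments whose predicted
# probability of being abusive exceeds a threshold.
# The dataset and threshold below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = abusive, 0 = acceptable.
comments = [
    "You were brilliant tonight, unlucky with the penalty",
    "Proud of this squad no matter the result",
    "Go back to where you came from",
    "People like you shouldn't be allowed to play",
    "What a save by the keeper!",
    "You are a disgrace and don't belong here",
]
labels = [0, 0, 1, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

THRESHOLD = 0.7  # how confident the model must be before hiding a comment

def moderate(comment):
    """Return 'hide' or 'show' based on the model's abuse probability."""
    p_abusive = model.predict_proba([comment])[0][1]
    return "hide" if p_abusive >= THRESHOLD else "show"

print(moderate("Unlucky mate, hold your head high"))
print(moderate("You don't belong in this country"))
```

Even in this toy version, the core tension is visible: lower the threshold and legitimate speech starts disappearing; raise it and more abuse slips through. Real systems face the same trade-off across millions of posts a day.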
But how far can we go with this before it gets out of control? Whilst the idea of having proactive protection in place sounds promising, there is a chance it could erode free speech, adding fuel to arguments already well underway. Is using machine learning to hide negative speech a step in the right direction, or the beginning of an Orwellian state?
But what is freedom of speech? According to Oxford Languages, it is as follows:
free speech
noun
the right to express any opinions without censorship or restraint.
example: "it violated the first-amendment guarantee of free speech"
Free speech has been debated for decades and most of us have something to say about it. Google recently announced its AI-powered ‘Inclusive Warnings’ feature, designed to help phase out phrases that could be considered harmful or outdated, suggesting “crewed” instead of “manned”, or “police officer” instead of “policeman”. Ultimately this is a step in the right direction for correcting habitual language choices and including marginalised groups such as women and non-binary people in our lexicon. However, with innocuous terms like ‘motherboard’ still being flagged, it clearly needs some fine-tuning.
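Google hasn't said how the feature decides what to flag, but the ‘motherboard’ complaint looks like a classic false-positive problem. A toy linter sketch makes it concrete; the suggestion list here is invented for illustration, and Google's real system will rely on far richer context models than simple matching.

```python
# A toy inclusive-language linter, illustrating why words like
# "motherboard" can get flagged. The suggestion list is invented for
# illustration; Google's actual term lists and models are not public.
import re

# term to flag -> suggested alternative
SUGGESTIONS = {
    "manned": "crewed",
    "policeman": "police officer",
    "mother": "parent",  # hypothetical entry, to show the false positive
}

def lint_naive(text):
    """Substring matching: over-flags, e.g. 'mother' inside 'motherboard'."""
    low = text.lower()
    return [(t, s) for t, s in SUGGESTIONS.items() if t in low]

def lint_word_boundary(text):
    """Whole-word matching avoids 'motherboard', but still has no sense
    of context (it would flag 'mother' in a sentence about families)."""
    return [(t, s) for t, s in SUGGESTIONS.items()
            if re.search(rf"\b{re.escape(t)}\b", text, re.IGNORECASE)]

sentence = "The engineer manned the desk and replaced the motherboard."
print(lint_naive(sentence))          # flags 'manned' and 'mother' (false positive)
print(lint_word_boundary(sentence))  # flags only 'manned'
```

Word-boundary matching fixes ‘motherboard’, but it would still flag ‘mother’ in a sentence about someone's family, which is the deeper point: without understanding context, any term list will over-flag somewhere.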
Something to tweet about?
Further to this came the highly controversial news of Elon Musk buying Twitter for a staggering $44 billion this week. He claims he wants to see Twitter reach its "extraordinary potential," and that he is “not even interested in gaining revenue from it”, all the while remaining the world's richest man (who doesn't pay income tax).
Musk claims he wants Twitter to be a platform for freedom of speech, stripping out the censorship systems he himself has been on the receiving end of many times. Given that Twitter is many people's main source of news, could this present a serious threat to global democracy?
The problem is that, while censors in the 1990s merely believed that TV shows, video games, and explicit music could harm children, much of today's misinformation demonstrably does cause harm. And when the world's most powerful people run free with a platform as large as Twitter, unlimited free speech and control of information pose a threat to the many, not just children.
How far can we go to monitor what is said online? Should there always be human intervention, or will we one day be able to rely on an algorithm to decide what is and isn’t offensive?