+++
title = "How the UK's Online Safety Bill threatens Matrix"
date = "2021-05-19T15:47:03Z"
updated = "2021-05-19T14:48:44Z"
path = "/blog/2021/05/19/how-the-uk-s-online-safety-bill-threatens-matrix"
aliases = ["/blog/2021/05/19/how-the-u-ks-online-safety-bill-threatens-matrix"]

[taxonomies]
author = ["Denise Almeida"]
category = ["Tech", "General"]

[extra]
image = "https://matrix.org/blog/img/2021-05-19-brazil.jpg"
+++

Last week the UK government published a [draft of the proposed Online Safety
Bill](https://www.gov.uk/government/publications/draft-online-safety-bill),
after having initially introduced [formal proposals for said bill in early
2020](https://www.gov.uk/government/consultations/online-harms-white-paper/public-feedback/online-harms-white-paper-initial-consultation-response).
With this post we aim to shed some light on its potential impacts and explain
why we think that this bill - despite having great intentions - may actually
be setting a dangerous precedent when it comes to our rights to privacy,
freedom of expression and self-determination.

The proposed bill aims to provide a legal framework to address illegal and
harmful content online. This focus on “not illegal, but harmful” content is at
the centre of our concerns - it puts responsibility on organisations
themselves to arbitrarily decide what might be harmful, without any legal
backing. The bill itself does not actually provide a definition of “harmful”,
instead relying on service providers to assess and decide on this. This
requirement to identify what is “likely to be harmful” applies to all users,
children and adults. Our question here is - would you trust a service provider
to decide what might be harmful to you and your children, with zero input from
you as a user?

Additionally, the bill incentivises the use of privacy-invasive age
verification processes, which come with their own set of problems. This
complete disregard for people’s right to privacy is a reflection of the
privileged perspectives of those in charge of drafting this bill - a bill
which fails to acknowledge how _actually_ harmful it would be for certain
groups of the population to have their real-life identity associated with
their online identity.

Our view of the world, and of the internet, is markedly different from the one
presented by this bill. Now, this categorically does not mean we don’t care
about online safety (it is quite literally our bread and butter) - we just
fundamentally disagree with the approach taken.

Whilst we sympathise with the government’s desire to show action in this space
and to do something about children’s safety (everyone’s safety really), we
cannot possibly agree with the methods.

Back in October of 2020 we presented [our proposed approach to online safety](https://matrix.org/blog/2020/10/19/combating-abuse-in-matrix-without-backdoors) -
ironically also in response to a government proposal, albeit about encryption
backdoors. In it, we briefly discussed the dangers of absolute determinations
of morality from a single cultural perspective:

> <a href="https://matrix.org/blog/2020/10/19/combating-abuse-in-matrix-without-backdoors">As uncomfortable as it may be, one man’s terrorist is another man’s freedom fighter, and different jurisdictions have different laws - and it’s not up to the Matrix.org Foundation to play God and adjudicate.</a>

We now find ourselves reading a piece of legislation that essentially demands
these determinations from tech companies. The beauty of the human experience
lies in its diversity, and when we force technology companies to make calls
about what is right or wrong - or what is “likely to have adverse
psychological or physical impacts” on children - we end up in a dangerous
place of centralising and regulating relative morals. Worst of all, when the
consequence of getting it wrong is criminal liability for senior managers,
what do we think will happen?

Regardless of how omnipresent it is in our daily lives, technology is still
not a solution for human problems. Forcing organisations to be judge and jury
of human morals for the sake of “free speech” will, ironically, have severe
consequences for free speech, as risk profiles will change for fear of
liability.

Forcing a “duty of care” responsibility on organisations which operate online
will not only drown small and medium-sized companies in administrative tasks
and costs, it will further entrench the existing monopolies of Big Tech.
Plainly, Big Tech can afford the regulatory burden - small start-ups can’t.
Future creators will have their wings clipped from the outset, and we might
just miss out on new ideas and projects for fear of legal repercussions. This
is a threat to the technology sector, particularly those building on emerging
technologies like Matrix. In some ways, it is a threat to democracy and some
of the freedoms this bill claims to protect.

These are, quite frankly, steps towards an authoritarian dystopia. If Trust &
Safety managers start censoring something as natural as a nipple on the off
chance it might cause “adverse psychological impacts” on children, whose
freedom of expression are we actually protecting here?

More specifically on the issue of content moderation: the [impact assessment
provided by the government alongside this
bill](https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/985283/Draft_Online_Safety_Bill_-_Impact_Assessment_Web_Accessible.pdf)
predicts that the additional costs for companies directly related to the bill
will run into the billions over the course of 10 years. The cost for the
government? £400k, in every proposed policy option. Our question is - why are
these responsibilities being placed on tech companies, when evidently this is
a societal problem?

We are not saying it is up to the government to single-handedly end the
existence of Child Sexual Abuse and Exploitation (CSAE) or extremist content
online. What we are saying is that it takes more than content filtering, risk
assessments and (faulty) age verification processes for it to end. More
funding for tech literacy organisations and schools, to give children (and
parents) the tools to stay safe, is the first thing that comes to mind.
Further investment in law enforcement cyber units and the judicial system,
improving tech companies’ routes for abuse reporting and allowing the actual
judges to do the judging seem pretty sensible too. What is absolutely
egregious is the degradation of the digital rights of the majority due to the
wrongdoings of a few.

Our goal with this post is not to be dramatic or alarmist. However, we want to
add our voices to the countless [digital rights
campaigners](https://www.openrightsgroup.org/blog/online-abuse-why-management-liability-isnt-the-answer/),
individuals and organisations that have been raising the alarm since the early
days of this bill. Just like with coercive control and abuse, the degradation
of our rights does not happen all at once. It is a slippery slope that starts
with something as (seemingly) innocuous as [mandatory content scanning for
CSAE content and ends with authoritarian surveillance
infrastructure](https://twitter.com/matthew_d_green/status/1392823038920564736).
It is our duty to put a stop to this before it even begins.

<small style="display: block; text-align: right">
Twitter card image credit from <a href="https://film-grab.com/2010/10/04/brazil/#bwg644/39614">Brazil</a>, which feels all too familiar right now.
</small>