Should social media platforms have a duty to care?

By Kim Connolly-Stone, Contributor. 14 October 2021, 8:28 am

I reckon most of us would answer yes to this question these days, even tech libertarians who have stood up for a free and open Internet.

The reality is that there is a lot of stuff people do “on” the Internet that is not good for society and can cause serious harm. Misinformation, terrorist and violent extremist content, bullying, and hateful and discriminatory speech are just some of the concerns. This is one of the reasons InternetNZ has started talking about an Internet for Good, alongside an Internet for All.

Governments around the world are thinking about how to regulate the activities of social media platforms and whether a stronger hand is needed. I say “yes please”. In New Zealand, some of these questions will be asked as part of the work the Department of Internal Affairs is doing on media and online content regulation. The jury is still out on the scope of this work, and some consultations are already underway.

One place doing work that fascinates me is the UK, where they are talking about platforms having a “duty of care”. On the face of it, it sounds pretty sensible. We do so much on these platforms. They should take care in the services they offer and in how they go about their business. It’s not just about the content that gets served up, but how we are directed to it, what platforms do when bad stuff happens, and how transparent they are about it.

Duty of care?

The UK wants to be “the safest place in the world to go online, and the best place to start and grow a digital business”. They want to provide clarity to social media companies on what is expected of them, rather than just leaving it to their terms and conditions and policies on acceptable use. I’m down with that.

But this is easier said than done. It’s also a little confusing for lawyers, who think of duty of care as a common-law thing. Let’s take a look (any legal beagles can check out the Draft Online Safety Bill).

A big part of the UK plan is for the platforms to show they are fulfilling their duty of care. What exactly is required will be set out in codes of practice written by Ofcom, which will be able to take action if companies don’t comply.

There is a hook that allows alternative approaches to be used if they would have the same impact. This sounds like a bit of an advantage for incumbents with large legal departments.

Transparency reporting is on the agenda (yay), along with access to data for independent research, and a user complaints function that is overseen by the regulator.

Getting this duty of care thing right is going to be tricky for a number of reasons:

– It might be easy to put the principle of duty of care in law, but the hard work on the codes of practice that set the expectations is still getting going.

– Deciding what content is in scope is not easy. The UK plan is to include content that is illegal and some content that is harmful but not currently illegal. The responsible minister will come up with a list of the harmful things, which sounds like quite a hornet’s nest. The UK lawyer Graham Smith has explored some of the complications here: Harm Version 3.0: the draft Online Safety Bill.

– The fun with definitions continues in deciding which providers of Internet services are in scope. They are talking about user-generated content sharing providers and search engines. This is a broad church.

– Enforcement of the duty of care probably won’t be a magic wand to make platforms do the things we want. Ofcom, as the regulator, won’t have unlimited resources and will need to make choices about what it enforces or pursues. I can’t imagine being able to call them up about a takedown request that is taking too long to action.

– All of these open questions make it hard for people and online services to know what the rules will be, and how they will work in practice.

A lot of these issues relate to a question we’ll need to grapple with in New Zealand too: what do we mean when we’re talking about safety and harm online?

The Internet is global, and so are many of the problems and the big services that governments and regulators around the world are responding to. In Aotearoa, we need to grapple with these same questions, but we don’t need to accept the same answers as everyone else. We have the chance to figure out what makes sense for us. So when you get the chance to have your say in the review of online content regulation, do it.

Kim Connolly-Stone is Policy Director at InternetNZ

Source: ITP New Zealand Tech Blog
