Regulation and accountability: How to save the internet
Platform governance has slipped the moorings of national law and democratic accountability, while “regulation by outrage” has filled the policy gap.
The European Commission writes: “like any communication [technology] … the Internet carries an amount of potentially harmful or illegal content or can be misused as a vehicle for criminal activities … [these] are pressing issues of public, political, commercial and legal interest … Recent political discussions in the European Union have stressed the need for urgent action and concrete solutions … which should be put in place rapidly.”
This excerpt, however, is not from a recent report, regulation, or directive, but from a 1996 communication on illegal and harmful online content. Areas of concern in those days included national security, protection of minors, protection of human dignity, economic security, malicious hacking, protection of privacy, protection of reputation, and intellectual property – a list as relevant today as it was 24 years ago.
Platforms disrupt policymaking
Since then, there has been a vast amount of policy activity, soft law, multi-stakeholder dialogue, and “voluntary” industry action. So, perhaps we should ask: why – after a generation of debate, regulation, and legislation – have governments and platforms been unable to provide convincing answers to enduring political and public concerns?
When the Commission wrote that communication, there were only 160m internet users and 10m websites worldwide. Now, there are 4 billion internet users and well over 1 billion websites. For many people, the smartphone is now the primary means of accessing the internet.
Most growth in internet usage today comes from developing countries, where one in two internet users is a child.
Platforms emerged to help consumers navigate the vast amount of content and services, goods and suppliers, available online. They provide a form of centralised control of the internet’s massively open markets. They have written rules – Facebook’s community standards, Uber’s driver requirements – but these are arguably less important than the implicit rules embedded in the algorithms that sort, rate, rank, and recommend consumers’ choices. A common complaint is that these new players are “lawless”, but a deeper concern may be that they are “lawmakers”, in terms of code, algorithms, and data.
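To make the “lawmaker in code” point concrete, here is a minimal, purely hypothetical sketch of a feed-ranking function in Python. Every field name and weight below is invented for illustration; real platform rankers are vastly more complex. The point is simply that each constant in such code functions as a rule about what users see, set without any legislative process.

```python
# A toy feed ranker. All names and weights are hypothetical; the point
# is that each constant below is, in effect, a rule about visibility.
from dataclasses import dataclass

@dataclass
class Post:
    engagement: float  # predicted likes/shares, scaled 0..1
    recency: float     # 1.0 = just posted, decaying towards 0
    reported: bool     # flagged by another user

def score(post: Post) -> float:
    """Rank a post for a user's feed. The 0.7/0.3 split decides how much
    engagement is rewarded over freshness; the 0.5 penalty decides how
    far a user report suppresses visibility. Both are policy choices."""
    s = 0.7 * post.engagement + 0.3 * post.recency
    if post.reported:
        s *= 0.5  # a de facto rule: reported content is half as visible
    return s

posts = [Post(engagement=0.9, recency=0.2, reported=True),
         Post(engagement=0.4, recency=0.9, reported=False)]
feed = sorted(posts, key=score, reverse=True)  # the "lawmaking" step
```

None of these numbers has any authority beyond the engineer who committed them; changing the 0.5 penalty to 0.1 would amount to a new “law” on reported content, enacted silently.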
Sometimes, platforms’ commercial incentives are aligned with consumers’ interests and desired policy outcomes – for example, Google’s largely successful campaign against spam in the early 2000s, and eBay’s efforts to reduce fraud on its platform, including its work with law enforcement agencies. But, often, they are not – as we have seen with a wide range of content-related harms.
The underlying problem is that platform governance has slipped the moorings of national law and democratic accountability, and that has proved unsustainable. “Regulation by outrage” has filled the policy gap: a problem is identified, media coverage intensifies, political pressure is applied, and threats of regulation are issued. Platforms respond with mea culpas and promises to do better.
Initiatives are launched, by individual companies or at industry level, with varying degrees of involvement from regulators and governments. All parties can then claim that “something has been done” – but exactly what, and to what effect, may remain unclear.
Now, governments in the United Kingdom, France, Germany, Italy, Spain, Australia, Canada, Singapore, Sri Lanka, and even the United States, and no doubt others, are exploring ways of getting more traction on the platform economy. How they go about it, and the tools and frameworks they choose, are arguably the most important questions in technology and regulation today.
The method deficit
It is strange, therefore, that there has been no systematic attempt by governments – in the UK, continental Europe, or (as far as I know) any other developed democracy – to review how rules are made for the platform economy.
Many reviews, inquiries, and commissions have identified alleged problems with platform markets, and some have suggested remedies including regulation of platform operators. But extending existing sector rules and frameworks is not the right approach; as Edith Ramirez, the former chair of the Federal Trade Commission, put it in 2015: “existing regulatory schemes tend to mirror, and perhaps even entrench, traditional business models and thereby chill pro-consumer innovation.”
More generally, many commentators have pointed out that prescriptive, rules-based regulation is unlikely to work in platform markets. Platform governance is dynamic, data-driven, and iterative. Problems manifest in different ways on different platforms, and evolve over time. Each platform will need to develop bespoke responses to the particular challenges it faces, and review its strategy in response to changing user behaviours. Ensuring consumer choice and competition between platforms is part of the solution, and policymakers should be alert to the possible anti-competitive effects of interventions.
In this environment, blunt, one-size-fits-all regulation is likely to have unintended consequences. At best, rules may only address part of the full spectrum of platform governance activities. For example, the draft EU Terrorist Content Regulation empowers national authorities to order platforms to take content down, with penalties for failing to do so expeditiously. But notice-and-takedown regimes belong to an earlier technological era, before the bigger platforms developed automated tools – tools that now identify 99 per cent of the content those platforms block or remove, without any human involvement.
A good policy would engage with the effectiveness of those tools, both in correctly identifying illegal content and in not inadvertently blocking legal material. But it would also recognise that not all platforms need, or are able, to adopt the same solution. Policy made with Facebook and Google in mind often results in rules that apply indiscriminately to the whole industry. This imposes great costs and has anti-competitive effects; worse, it locks in specific technical solutions that may be wholly unsuited to the way problems will develop in future.
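To illustrate what “engaging with the effectiveness of those tools” might mean in practice, here is a deliberately simplified sketch; all thresholds, names, and numbers are invented, not drawn from any real system. The single threshold at which an automated classifier removes content embodies the trade-off described above: lower it, and more illegal content is caught but more legal speech is wrongly blocked.

```python
# A toy moderation decision. Hypothetical thresholds throughout; real
# systems combine many models, appeal routes, and human review queues.
def moderate(confidence_illegal: float, removal_threshold: float = 0.95) -> str:
    """Map a model's confidence that content is illegal to an action.
    Lowering removal_threshold removes more illegal content but also
    over-blocks more legal material; raising it does the reverse."""
    if confidence_illegal >= removal_threshold:
        return "remove"
    if confidence_illegal >= 0.5:
        return "queue_for_human_review"
    return "allow"

print(moderate(0.97))  # remove
print(moderate(0.70))  # queue_for_human_review
```

A smaller platform with no human review team cannot use the middle option at all, which is one concrete reason why identical rules, written with the largest firms in mind, bear down unequally on the rest of the industry.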
We need new models of co-governance designed for today’s fast-moving, massively open but also highly centralised platform markets. The central issue is how responsibilities should be divided between the government, parliament, independent regulators/supervision bodies, platforms, and users – all of whom have a role to play in securing the benefits, and mitigating the risks, of these markets.
Starting points
Arguably, attempts to address this issue are starting a decade too late. But there is a growing consensus that both regulation by lawmakers and self-regulation have failed to achieve their intended goals. As governments in Europe and around the world consider new approaches, it may be helpful to think about why this is, and what today’s policymakers can learn from previous experience. Here are some possible questions to consider.
Firstly, is it time to retire “regulation” and focus on “accountability”? Perhaps regulation has become too broad a concept to be useful. Regulating platforms as if they were broadcasters, retailers, or taxi firms is likely to go wrong. If the job of the government is not to tell platforms what to do, but to supervise (put in place systems to assess the effectiveness of platform policies and ensure a proportionate, evidence-based response to public concerns), does that make it easier to think about legislation and the task of platform “regulators”?
Secondly, what does “good behaviour” by platforms look like? Some would say, “respect for human rights”. But platform governance is about balancing rights – to free speech, dignity, privacy, the conduct of business, and so on. This is inevitably controversial; there is no “right balance” to be struck. Is “good behaviour” more about due process – meeting procedural standards – than achieving some unrealistic standard of perfection? If so, what are those standards? And how do policymakers ensure expectations are proportionate, given that different issues manifest differently on different platforms, and there is a risk that regulation will act as a barrier to entry?
Thirdly, is Europe constitutionally unsuited to regulating online content and conduct? The EU has led the way in competition and data protection because member states are (broadly) aligned and content to allow the EU to lead. This is not true of content and speech, where a broadly shared commitment to human rights has coexisted with very different national legal regimes and cultural perspectives. The result has been regulation that is both inappropriately prescriptive and unacceptably vague (such as the Copyright Directive and the Terrorist Content Regulation), with the courts left to fill in the details; this is hardly a good model for dealing with the increasingly wide range of issues platforms are being asked to address. What are the alternatives? Is there a specifically “European” approach to platform supervision? How can it be successful in light of the alternative (US or Chinese) models?
Some industry players have been understandably reluctant to engage proactively with the internet regulation debate, at least publicly. But more inclusive discussion of the institutions and mechanisms required to align platform governance with policy goals would benefit all participants.
This article is a concept note that was written by Mark Bunting for a joint workshop organised by ECFR and Telefonica. Bunting was a partner at Communications Chambers at the time of writing and prior to that, a Visiting Associate at the Oxford Internet Institute and independent media and technology policy adviser.
The European Council on Foreign Relations does not take collective positions. ECFR publications only represent the views of their individual authors.