The European Union’s Digital Services Act (DSA) comes into force today, obliging “very large online platforms” to swiftly take down whatever unelected European Commission bureaucrats deem to be “disinformation”.
As Laurie Wastell points out in the European Conservative, the Commission now has at its disposal an aggressive enforcement regime. Big Tech companies that fail to abide by the EU’s “Strengthened Code of Practice on Disinformation”, which requires swift censorship of mis- and disinformation, can be fined up to 6% of their annual global revenue, subjected to a Commission investigation, and potentially even barred from operating in the EU altogether.
So, who is to say whether something is disinformation? For social media platforms operating within the EU, the Commission itself is the arbiter: as the EU’s executive body, it is invested by the DSA with the exclusive power to decide whether platforms like X and Facebook are doing enough to combat disinformation, to assess their compliance with the Code, and to apply penalties if a platform is found wanting.
And what kind of speech is the DSA expected to police? The Code defines disinformation as “false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm”. That sounds innocent and apolitical enough. Yet the European Digital Media Observatory (EDMO), launched by the Commission in June 2020 with the aim of “identify[ing] disinformation, uproot[ing] its sources or dilut[ing] its impact”, appears to adopt a much broader, deeply politicized understanding of “misleading content”.
