European Union secretly pressures U.S. firms to censor immigration criticism, memes: House GOP
"We need to take back our country" is "coded language to express anti-Muslim sentiment," off-the-record workshop tells platforms. Poland complains to TikTok about post calling electric cars "neither ecological nor economical."
The European Union is secretly leaning on tech platforms to censor routine political speech and even jokes as a legal obligation under its Digital Services Act, according to an interim staff report released Friday by the Republican-led House Judiciary Committee, which has also probed Brazil's censorship, Biden administration jawboning and ideological advertiser boycotts.
Committee Chairman Jim Jordan, R-Ohio, said it was prompted by then-EU Commissioner for Internal Market Thierry Breton's threat against X last summer, later disavowed by the European Commission, that owner Elon Musk's scheduled livestream with then-presidential candidate Donald Trump might constitute "illegal content" under the DSA.
Though Breton resigned "under pressure from EU President Ursula von der Leyen" after Jordan demanded a briefing from Breton on his threats, his successor, Henna Virkkunen, "remains strongly supportive of the DSA’s censorship provisions and continues to enforce them against American companies," the report says.
"Camouflaged as a regulation to increase online safety," the DSA lets European regulators "suppress speech globally" by threatening fines of up to 6% of global revenue against platforms, based anywhere, that refuse to censor "humor, satire, and core political speech" that offends bureaucrats or to align content moderation with EC preferences, it says.
The law empowers them to "temporarily shut down platforms within the EU" if "extraordinary circumstances lead to a serious threat to public security or public health in the Union."
Platforms must allow "certified third-party arbitrators to resolve content moderation disputes," who "do not need to be independent from the European regulators who certify them, incentivizing arbitrators to heed regulators’ censorship demands," the report says.
Because "platforms bear the cost when they lose at arbitration," they are also incentivized to censor flagged content "before arbitration begins."
The DSA has an "arbitrary threshold" of 45 million monthly users to qualify as a strictly regulated "very large online platform," seemingly chosen to "sweep in major American companies while carving out Europe’s top tech companies," with only Booking.com and "pornography websites" qualifying, the report says.
The EC "invented workarounds" to exempt other European companies from VLOP classification, for example Spotify, which gets to split its products between music and podcasts "for the purpose of counting EU users," the report says. It cites a critic who alleges a "clear discrepancy" between "self-declared" monthly users and "reality."
"Absolutely nothing in the DSA requires a platform to remove lawful content," EC spokesperson Thomas Regnier told Politico EU in response to the staff report, claiming freedom of expression is "a fundamental right in the EU" and "at the heart of our legislations."
Regnier said "content removals based on regulatory authorities’ orders to act against illegal content account for less than 0.001 percent" of the content moderation decisions, with platforms "proactively" deciding the rest based on their own terms and conditions.
'I'm not racist, but …' is 'coded language to express anti-Muslim sentiment'
The committee's subpoenas revealed content from the EC's May 7 workshop with DSA stakeholders, which unlike its "contemporaneous" Digital Markets Act workshops was closed to the public and operated under the Chatham House Rule, banning participants from describing "exercise scenarios" or naming or quoting participants without permission.
It also obtained emails between EC staff and tech companies on purportedly "voluntary" codes of conduct on hate speech and disinformation, showing "regulators repeatedly and deliberately reached out to pressure reluctant platforms to join" and retaliated against resisters, for example by opening a probe of X for refusing to use purported fact-checkers.
"The censorship is largely one-sided, almost uniformly targeting political conservatives," the report's press release says.
Twenty pages of the 145-page staff report, the first of 22 exhibits, are just the May 7 workshop agenda, rules, participants and "scenarios and questions" for each of the eight sessions across four tracks, in which participants share their proposed "interventions" for the fictional platform "Delta" in moderated discussions.
One scenario labeled the phrases "We need to take back our country" and "I'm not racist, but …" as "illegal hate speech" that uses "coded language to express anti-Muslim sentiment" when posted as comments on the meme "Terrorist in disguise," showing a woman in a hijab.
Its associated questions ask what processes Delta should have to "review and update terms and conditions" – with no indication these can be limited to Europe – in response to DSA-designated risks.
Participants should ponder how content moderation can "address the use of coded language or memes that may be used to spread hate speech or discriminatory ideologies" such as the hijab example, and how Delta can "cooperate with trusted flaggers, other providers, or civil society organizations" to identify and halt such content.
"The panel on disinformation was definitely the most difficult, because of the presence of fact-checkers," according to an American company's notes that also rate CSOs by name.
"CSOs claimed moderation efforts must go beyond illegality and also better address harmful content and disinformation aimed at dehumanising or inciting hate," and some suggested "labelling is not enough" even for legal hate speech, the notes said.
The Institute for Strategic Dialogue is "quite aggressive and critical" of platforms that resist self-identified fact-checkers, while Access Now supports removal of "everything that can be considered as hateful and harmful," even when legal.
The EC-funded European Digital Media Observatory is the "most aggressive" and claims that Community Notes – X's crowdsourced fact-checking system that Facebook copied earlier this year, ditching third-party checkers like EDMO – "don't work," the notes say.
Don't badmouth electric cars
The speech threat goes beyond decisions made at the EU level because "European courts have empowered national regulators to issue global content removal orders," Jordan said, citing a 2019 ruling that the electronic commerce directive "does not preclude a court of a Member State from" ordering a host provider to remove information.
Internal TikTok documents show Poland's National Research Institute asked the platform to censor a post, apparently paraphrased, that "suggested that electric cars are neither ecological nor an economical solution."
The French National Police directed X to remove a U.S.-based post that Jordan characterized as "satirically noting that permissive French immigration and citizenship policies may have caused a violent attack by a Syrian asylum seeker." The request in Exhibit 9 is labeled "Violent Threat/Incitement, Illegal Content."
German authorities, in a demand written in both German and English in Exhibit 10, told X to remove a post that shared an article about a Syrian family that reportedly "committed 110 criminal offences" and commented "Deport the whole lot of them!"
The post violates the German criminal code because "hatred is incited against a national group (Syrians) and violence and arbitrary measures are called for," and violates an interstate treaty on "human dignity" by "insulting, maliciously denigrating or defaming parts of the population or this group" defined by its ethnicity, the German demand says.
The Facts Inside Our Reporter's Notebook
Links
- interim staff report
- Brazil's censorship
- Biden administration jawboning
- ideological advertiser boycotts
- Jim Jordan, R-Ohio, said it was prompted
- Thierry Breton's threat against X last summer
- disavowed by the European Commission
- Jordan demanded a briefing from Breton on his threats
- Booking.com
- "clear discrepancy" between "self-declared" monthly users and "reality."
- Politico EU
- Digital Markets Act workshops
- Chatham House Rule
- press release
- One scenario labeled the phrases
- "review and update terms and conditions"
- "address the use of coded language or memes
- American company's notes
- Facebook copied earlier this year, ditching third-party checkers like EDMO
- Jordan said
- 2019 ruling
- electronic commerce directive
- Poland's National Research Institute asked
- French National Police directed X
- German authorities demanded X