By MARK SCOTT
Send tips here | Subscribe for free | View in your browser
WELCOME BACK TO DIGITAL BRIDGE. I’m Mark Scott, POLITICO’s chief technology correspondent, and I bring a warning for those of us who love what we do: there’s always another person’s job that’s better than yours. Exhibit One. I’m off to wallow in a large cup of coffee.
Buckle up, we’ve got a cracker this week:
— TikTok’s boss is in the hot seat in Washington. His grilling today shows how Washington blends national security with protectionism.
— Melanie Dawes, head of the British content regulator Ofcom, wants to hold Big Tech to account — but doesn’t want to police content itself.
— How best to combat increasingly sophisticated cyberattacks, disinformation and fraud? Boffins at Meta think they’ve got the answer.
WHAT TIKTOK IS, AND WHAT TIKTOK IS NOT
I WOULDN’T WANT TO BE IN SHOU CHEW’S SHOES. TikTok’s Singaporean chief executive will face a grilling Thursday in front of United States lawmakers (watch here from 10 a.m. ET / 2 p.m. CET). You can see his pre-bunk here and here. Allegations include: The Chinese-owned social media giant represents an existential threat to American national security; is in the pocket of the Chinese Communist Party; plays fast and loose with its users’ data; and can be used to manipulate Americans. Before we dig into this, let’s be clear: TikTok vehemently denies those allegations. But (most) Western governments don’t believe the company.
What’s clear is there’s a lot of smoke to these claims, but not much fire. TikTok admitted in December some of its employees had used the app to spy on reporters (in search of the source of company leaks) — something U.S. authorities are also now probing. Ireland’s privacy regulator is separately investigating whether the company complies with European Union data protection rules when shipping users’ information to China. The U.S. government and counterparts across Europe have banned the social media network from government officials’ smartphones. (That’s mostly a token gesture given the limited number of officials who have TikTok on their devices.)
TikTok has tried to get ahead of this. Its so-called Project Texas creates a separate structure for its U.S. users that technically has no ties to China. In Europe, the company’s separate Project Clover similarly plans to store local data within the 27-country bloc and out of the prying hands of the Chinese Communist Party. Still, U.S. officials aren’t biting after the all-powerful inter-agency Committee on Foreign Investment in the United States demanded ByteDance, TikTok’s parent company, sell off its popular social network or face a nationwide ban.
This comes down to two things: 1. Who actually owns TikTok? and 2. What pressure can Beijing potentially put on its owner? Although incorporated in the Cayman Islands, ByteDance is definitely Chinese, and Beijing-linked companies own a (very small) stake in its domestic operations. Technically, China’s government can demand access to the company’s inner workings, via a 2014 counter-espionage law and 2017 national intelligence law. Both statutes demand local firms assist with “state intelligence work,” though TikTok denies such requests would apply to it.
So is the West right to freak out about TikTok? The answer, annoyingly, is maybe. There’s no doubt China has the political muscle to bend ByteDance to its will — and TikTok’s terms of service allow for some pretty aggressive data-collection practices. But does that go beyond what American social media companies also ask for? No. It’s just that U.S. lawmakers aren’t worried as much about Meta or Alphabet holding your data (or American national security agencies accessing it) compared with when a Chinese-owned tech giant is doing the same. Call it protectionism, American style: It’s OK when Silicon Valley vacuums up data, less so when it’s someone else.
U.S. lawmakers are also trying to go hard on national security. Case in point: the so-called RESTRICT Act that would give the U.S. Commerce Department wide-ranging powers to determine if technology “in which any foreign adversary has any interest and poses undue or unacceptable risk to national security” can be used within the U.S. That language wouldn’t be amiss if it came from, say, the French and how they try to protect their country’s interests.
It’s also hard to look past the diplomatic full-court press that Washington is mounting to urge its allies to follow its lead on TikTok. Just as we saw in the U.S. pushback against Huawei — over similar accusations that the company was at China’s beck and call — U.S. officials were doing the rounds at the World Economic Forum in Davos, the Munich Security Conference and Mobile World Congress in Barcelona, according to two industry officials and three policymakers briefed on those discussions. The pitch: Get on board with us on TikTok, though three of these individuals said American officials did not provide any new evidence as to why other Western governments should move against the app.
This American-led charge has led to grumbling in some EU capitals that no one is actually sure why TikTok is a threat. Three European Commission officials, who met with Chew during his recent visit to Brussels, complained that only weeks after their meeting — in which they discussed the company’s compliance with the bloc’s new content rules and its aim to house European data locally — they were ordered to delete the app from their devices (though none of the individuals had downloaded it). “We went from having a cordial meeting to being told to delete TikTok,” said one of those officials, adding reasons for the Commission-wide ban had still not been made public. “I think our American cousins have some explaining to do,” he added.
MELANIE DAWES: CONTENT WARRIOR
FOR SOMEONE ACCUSED BY HER DETRACTORS of running the United Kingdom’s Ministry of Truth, Melanie Dawes is less an all-knowing Big Brother figure and more just a staid government wonk. Dawes is head of the country’s Office of Communications — more commonly known as Ofcom — and her agency will soon be in charge of new online content rules aimed at forcing social media platforms and search giants to take greater responsibility for how content is shared widely on their platforms.
But for Dawes, whose government career dates back more than 30 years, this is not about faceless officials deciding what people can, and cannot, say online. Instead, it’s about the policy wonkery of independent risk assessments conducted by the companies to see how potential harm may spread; outside audits to ensure these tech giants are complying with their own terms of service; and in-depth research to mitigate harm, especially abusive material aimed at children.
“It’s not really a regime about content,” she told me in one of Ofcom’s glass-fronted conference rooms overlooking the Thames in central London. “It’s about systems and processes. It’s about the design of the service that does include things like the recommender algorithms and how they work.” Dawes envisages the U.K.’s Online Safety Bill as a transparency-focused stick that requires platforms to be accountable for their own policies — or face fines of up to 10 percent of global revenue (and even possible jail time for executives) if they don’t fall in line.
That’s certainly not how many would like Silicon Valley’s biggest names to be held to account over how hate speech and illegal content, like child sexual abuse material, can spread like wildfire online. But it’s also a recognition that many fret these new forms of online content rules will harm free speech. In the end, you can’t please everyone. “It’s not really about content and opining on particular bits of content as such,” said Dawes, who’s heading to Washington this week to meet with lawmakers and fellow regulators at the U.S. Federal Trade Commission and the Federal Communications Commission.
In the world of online content regulation, expect to hear a lot more about how (yawn) risk assessments and audits play into how regulators approach policing social media. The European Union and, soon, Canada are similarly following this model alongside the U.K. The goal, in theory, is to bring greater transparency to how these companies’ operations actually work in the hope of mandating change to systems that, until now, have been outside regulatory scrutiny.
“They will need to understand and explain to us how they run their algorithms,” said Dawes. “There’s no consistency and transparency about how they prevent harm. And, of course, there’s a fundamental business model which rewards scale and virality, but also amplifies really problematic harm.” Dawes said London would focus initially on reducing threats to children (in the form of pornography, child abuse and self-harm content) and those associated with terrorism.
There’s an international component, too. Sure, the U.S. isn’t going to have similar rules anytime soon. But the U.K. is already working with Ireland, Australia and Fiji to share best practices among content regulators. Dawes said her team was also talking to their counterparts in France and Germany, and Canada was also moving ahead with its own version of this social media-focused legislation. “We can be much more effective by having one single agreed approach and then implementing it in our different regimes or different jurisdictions in a consistent way,” she told me.
BY THE NUMBERS
COMMON LANGUAGE FOR FIGHTING ONLINE THREATS
WHEN IT COMES TO COUNTERING DISINFORMATION or hobbling a cyberattack, researchers face a question: how best to learn from others and share experiences to thwart the next potential threat? Meta’s Ben Nimmo and Eric Hutchins think they have an answer. The disinformation and cybersecurity experts just published a so-called kill chain that maps the common characteristics of different types of online attacks. It’s an effort to help those within platforms, in governments and across the wider research community share best practices and, hopefully, stop these malign actors before they do real damage. That includes outlining how digital bad guys conduct their attacks — everything from how they create social media bots to how they use artificial intelligence to hide their tracks to how they spread those messages far and wide.
“We can map the paths that operations take and once we’ve mapped it, we can look for where we can trip them up,” Nimmo told me. Meta has incorporated the plan into its own work and opened it up — via publication with the Carnegie Endowment for International Peace, a think tank — so that others can use it, too. “Building that community, I think, is really important,” Hutchins, the report’s co-author, added. “The extent that (people) can identify an issue and share it is really critical. Adversaries are hoping that we’re putting ourselves into silos, rather than working across them.”
**Hesse Digital Minister Kristine Sinemus and Irish Data Protection Commissioner Helen Dixon will be speaking at POLITICO Live’s Europe Tech Summit on April 26-27. Don’t miss out on their remarks – shape the future of tech policy with them by attending. Register here.**
WONK OF THE WEEK
DIGITAL BRIDGE MUST BE GETTING OLD if we’re starting to redo wonks. But given that Tyson Barker, a former Berlin-based think-tanker, just joined the U.S. State Department as a senior adviser in the bureau of European and Eurasian affairs — primarily to focus on digital issues — it felt right to rehash someone who kindly stepped in to write this newsletter when I was on vacation last year.
This isn’t Barker’s first stint at State. He was also a Europe-focused adviser at the agency back in 2014, before joining a series of transatlantic-focused think tanks in Germany. Most recently, he was head of technology and foreign policy at the German Council on Foreign Relations.
“Washington’s approach to Europe is Republicans are hostile and Democrats are indifferent. The reason is because [of] the way the U.S. sees the world. Sure, Ukraine and Russia, that’s an acute challenge. But the chronic challenge remains China,” he told me last year.
THEY SAID WHAT, NOW?
“The first step is to immediately dissuade officials of the national government to install or use apps on mobile work devices from companies originating from countries with an offensive cyber program against the Netherlands and/or against Dutch interests,” Alexandra van Huffelen, the Dutch minister of digitization, told lawmakers in reference to apps from China, Russia, North Korea and Iran.
WHAT I’M READING
— Critical raw materials have become intertwined in the geopolitics of technology, given the role they play in the production of a series of emerging technologies, argues Raquel Jorge Ricart for the Elcano Institute.
— The U.S. Federal Trade Commission has a warning for anyone using artificial intelligence to deceive people: Stop it; it’s against the law. More here.
— Intensive lobbying from tech companies has reduced safety obligations, sidelined human rights and secured carve-outs for key AI products within the EU’s AI Act, claims Corporate Europe Observatory.
— Stat of the day: In 2021, the latest year available, more than 230,000 non-U.S. persons were targeted under Section 702 of Title VII of the U.S. Foreign Intelligence Surveillance Act. Read more here.
— Russian cyberthreat activity against Ukraine has adjusted its targeting and techniques to focus on intelligence gathering within Ukraine and supporting the Kremlin’s civilian and military assets, according to analysis from Microsoft.
— This last one isn’t tech-related, but you should read it: The United Nations’ latest report on the threat that climate change poses to the planet. It’s some scary stuff.
SUBSCRIBE to the POLITICO newsletter family: Brussels Playbook | London Playbook | London Playbook PM | Playbook Paris | POLITICO Confidential | Sunday Crunch | EU Influence | London Influence | Digital Bridge | China Direct | Berlin Bulletin | D.C. Playbook | D.C. Influence | Global Insider | All our POLITICO Pro policy morning newsletters