
The Interface

Apple and Google’s COVID-19 notification system won’t work in a vacuum



Last month, before Google and Apple announced their joint effort to enable COVID-19 exposure notifications, I wrote about the trouble with using Bluetooth-based solutions for contact tracing. Chief among the issues is getting a meaningful number of people to download any app in the first place, public health officials told me. And now that such apps are being released in the United States, we’re seeing just how big a challenge that is.

Here’s Caroline Haskins writing Tuesday in BuzzFeed:

Utah Gov. Gary Herbert said on April 22 that the app, Healthy Together, would be an integral part of getting the state back on its feet: “This app will give public health workers information they need to understand and contain the pandemic and help Utahns get back to daily life.”

The state spent $2.75 million to purchase the app and is paying a monthly maintenance fee of $300,000, according to contracts obtained by BuzzFeed News. But as of May 18, just 45,000 of the state’s 3.2 million people had downloaded Healthy Together, according to Twenty.

That’s roughly 1.4 percent adoption, well below the 60 percent or so that public health officials say is necessary to make such exposure notifications effective. And it bodes ill for other states’ efforts to distribute their own apps, particularly in a world where the federal response continues to be confused and even counterproductive.
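The adoption math is worth spelling out, because it is even less forgiving than the headline number suggests. A sketch (the squared-coverage point is a general property of two-sided systems, not a figure from the BuzzFeed report):

```python
# Back-of-the-envelope adoption math for Utah's Healthy Together app.
# The download and population figures are from the BuzzFeed News report above.
downloads = 45_000
population = 3_200_000

adoption = downloads / population
print(f"Adoption: {adoption:.1%}")  # roughly 1.4%

# An exposure notification only fires when BOTH people in an encounter
# run the app, so the share of encounters covered scales roughly with
# the square of the adoption rate -- far below 1.4%.
encounter_coverage = adoption ** 2
print(f"Encounters covered: {encounter_coverage:.4%}")
```

That squared relationship is why public health officials set the bar for meaningful adoption so high.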

But a new reason for hope arrived today, in the form of an official release of the Apple/Google exposure notification protocol. The system, which allows official public health apps to use system-level Bluetooth features to help identify potential new cases of COVID-19, is now available as an update to iOS and Android. Three states are working on projects so far, Russell Brandom reported today at The Verge:

Alabama is developing an app in connection with a team from the University of Alabama, while the Medical University of South Carolina is heading up a similar project in collaboration with the state’s health agency.

Most notably, North Dakota is planning to incorporate the system into its Care19 app, which drew significant criticism from users in its early versions.

“As we respond to this unprecedented public health emergency, we invite other states to join us in leveraging smartphone technologies to strengthen existing contact tracing efforts,” North Dakota Gov. Doug Burgum said in a statement, “which are critical to getting communities and economies back up and running.”

In a call with reporters today, Apple and Google said 22 countries have received API access to date. Later this year, an update to iOS and Android will allow people to begin participating in the program even if they haven’t yet downloaded an official public health app.

But as we’ve discussed here before, the best-designed tech interventions won’t be effective if they’re not supported by contact tracing and isolation of new cases. So let’s check in to see how we’re doing on those fronts.

“Contact tracing” was the name originally given to the Apple/Google initiative, before the companies acknowledged that what they were doing didn’t quite live up to that standard. The term refers to getting in touch with people who may have been exposed to a disease and directing them to testing and other resources, and the current consensus view is that this work is best done by human beings. The Apple/Google system, which has been rebranded as “exposure notification,” is intended to augment the work of human contact tracers.
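The mechanics behind exposure notification can be sketched in a few lines. This is a simplified illustration, not the actual Apple/Google implementation — the real protocol uses HKDF-derived temporary exposure keys and AES-encrypted rolling proximity identifiers that rotate every 10 to 20 minutes — but the shape of the idea is the same:

```python
import hashlib
import os

# Simplified sketch of Bluetooth exposure notification.
# Each phone holds a secret daily key and broadcasts short-lived
# identifiers derived from it; no location or identity is shared.

def daily_key() -> bytes:
    """A fresh random key, generated on-device each day."""
    return os.urandom(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    """Derive the identifier broadcast during one time interval."""
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

# Phone A broadcasts; phone B stores every identifier it hears nearby.
key_a = daily_key()
heard_by_b = {rolling_id(key_a, i) for i in range(10)}

# If A later tests positive, A uploads its daily keys. B re-derives
# identifiers from the published keys and checks for a local match.
published_keys = [key_a]
exposed = any(
    rolling_id(k, i) in heard_by_b
    for k in published_keys
    for i in range(10)
)
print("possible exposure" if exposed else "no match")  # possible exposure
```

Because matching happens on the listener’s device, the system never learns who met whom — which is also why it can only notify, not trace.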

Around the United States and the world, public health departments are hiring people as contact tracers. Officials estimate that we will need at least 100,000 such workers in the United States, and as of April, 11,000 had already been hired. California and Massachusetts began hiring early; Illinois, Georgia, and Texas are among the states that have followed. There’s clearly much more work to be done, and quickly, but here’s a case where federal inaction hasn’t totally stopped states from developing a response. (More federal money to hire contact tracers would help, though.)

And what about isolating new cases? This is the process whereby people infected with COVID-19 temporarily relocate to government-run facilities to receive care in an environment where they are unlikely to spread the disease to others. Israel and Denmark are among the countries that have been using such facilities. San Francisco, in a possible step toward a contact isolation program, has begun paying to house people who were homeless in hotels as a measure to reduce the spread of the disease.

Lyman Stone makes the case for rapid deployment of contact isolation in the Washington Post. He imagines a world in which people receive tickets for failing to comply with orders to isolate — but also one in which the facilities themselves are nice enough to get people to go along with the idea voluntarily:

This system also encourages compliance because the centralized facilities would provide isolated individuals with all their basic needs (plus daily supervision so they would get treatment if they become sick). Food and medication can be delivered, WiFi would be free, and governments should provide financial compensation for lost work time. And, since covid-19 is much less dangerous to kids, families could choose for their children to be quarantined with them or separately, whichever they prefer. All of this would require legislation by state governments, but none of it is infeasible.

Alas, contact isolation sounds scary to many people. It conjures images of internment, stigmatization or family separation. But the truth is that the curtailment of our liberties would be minuscule compared with the society-wide lockdowns Americans have been enduring.

At a time when all of us are looking for answers to the pandemic, an approach that combines testing, tracing, and isolation appears to be as close to a sure thing as we have, short of a vaccine. Caroline Chen looked at the research Tuesday in ProPublica:

Researchers in the U.K. used a model to simulate the effects of various mitigation and containment strategies. The researchers estimated that isolating symptomatic cases would reduce transmission by 32%. But combining isolation with manual contact tracing of all contacts reduced transmission by 61%. If contact tracing only could track down acquaintances, but not all contacts, transmission was still reduced by 57%.

A second study, which used a model based on the Boston metropolitan area, found that so long as 50% of symptomatic infections were identified and 40% of their contacts were traced, the ensuing reduction in transmission would be sufficient to allow the reopening of the economy without overloading the health care system. The researchers picked Boston because of the quality of available data, according to senior author Yamir Moreno, a professor at the institute for biocomputation and physics of complex systems at the University of Zaragoza in Spain. “For other locations, these percentages will change, however, the fact that the best intervention is testing, contact tracing and quarantining remains,” he said.
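One way to read those percentages: a reduction in transmission matters to the extent it pushes the effective reproduction number below 1, the threshold at which an outbreak shrinks instead of grows. As a rough illustration — assuming a baseline R0 of 2.5, a commonly cited early estimate that is my assumption here, not a figure from either study:

```python
# Rough illustration: translating the U.K. study's reported transmission
# reductions into an effective reproduction number, assuming a baseline
# R0 of 2.5 (an illustrative assumption, not a figure from the study).
R0 = 2.5

scenarios = {
    "isolation of symptomatic cases only": 0.32,
    "isolation + tracing all contacts": 0.61,
    "isolation + tracing acquaintances only": 0.57,
}

for name, reduction in scenarios.items():
    r_eff = R0 * (1 - reduction)
    status = "shrinking" if r_eff < 1 else "still growing"
    print(f"{name}: R_eff = {r_eff:.2f} ({status})")
```

Under that assumption, only the combined isolation-plus-tracing scenario drops the outbreak below the break-even point — which is the studies’ argument for doing all of it at once.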

The Apple/Google collaboration represents a chance to use the companies’ vast size and power to make a positive contribution to public health during a crisis. But it will only ever be one piece of the puzzle — and not necessarily one of the larger pieces, either. The good news is that we increasingly understand how COVID-19 can be brought under control. The open question is whether the United States government, to which we have entrusted the job of keeping us all safe, will do what is necessary to make it happen.

Virus tracker

Total cases in the US: More than 1,547,300

Total deaths in the US: At least 92,600

Reported cases in California: 84,449

Total test results (positive and negative) in California: 1,339,316

Reported cases in New York: 359,235

Total test results (positive and negative) in New York: 1,467,739

Reported cases in New Jersey: 150,399

Total test results (positive and negative) in New Jersey: 520,182

Reported cases in Illinois: 98,300

Total test results (positive and negative) in Illinois: 621,684

Data from The New York Times. Test data from The COVID Tracking Project.


Twitter won’t add a “misleading” label to an article shared by Trump’s campaign manager, Brad Parscale, that claims hydroxychloroquine has a “90 percent chance of helping” COVID-19 patients. Even though the claim is misleading, Twitter says it won’t add a label because the link contains no direct call to action. Here’s Adi Robertson at The Verge:

The incident is an early test of Twitter’s expanding fight against misleading health information. This month, Twitter started labeling tweets that made false or disputed claims about the novel coronavirus, in addition to removing misinformation that could incite harm. A company spokesperson, however, said the tweet is “currently not in violation of the Twitter Rules and does not qualify for labeling.” Twitter says it’s prioritizing tweets that contain a potentially harmful call to action; it’s singled out messages that encouraged people to damage 5G cell towers, for instance. It says it won’t step in to label all tweets that contain unverified or disputed information about the coronavirus.

So far, Facebook also hasn’t made a call on whether the story violates its anti-misinformation rules. But a Facebook spokesperson told The Verge that the article would likely be eligible for fact-checking. The platform typically flags content that’s rated entirely or partially false, warning users and reducing its reach.

China has launched a Twitter offensive in the COVID-19 information war. Twitter output from China’s official sites has almost doubled since January, and the number of diplomatic Twitter accounts has tripled. In recent days, these accounts have been spreading a conspiracy theory that the virus came from a government lab in the US. (Anna Schecter / NBC)

Here’s how “Plandemic” went from a niche conspiracy video about COVID-19 to a mainstream phenomenon. This account includes a blow-by-blow look at who shared what, and when. (Sheera Frenkel, Ben Decker and Davey Alba / The New York Times)

The Israeli surveillance firm NSO Group created a web domain that looked as if it belonged to Facebook to entice targets to click on links that would install the company’s powerful phone hacking technology. Facebook is already suing the surveillance firm for leveraging a vulnerability in WhatsApp to let NSO clients remotely hack phones. (Joseph Cox / Vice)

Facebook hired Aparna Patrie, a Senate Judiciary attorney, to its public policy team amid ongoing antitrust scrutiny. Patrie served as committee counsel under Sen. Richard Blumenthal. (Keturah Hetrick / LegiStorm)

Google signed a deal with the US Department of Defense to build cloud technology designed to detect and respond to cyberthreats. The news comes two years after workers at the search giant protested Google’s contract with the Pentagon for Project Maven, an initiative that used AI to improve analysis of drone footage. (Richard Nieva / CNET)

A judge in Singapore sentenced a man to death via a Zoom call for his role in a drug deal. It’s one of just two known cases where a capital punishment verdict has been delivered remotely. (John Geddie / Reuters)

The rollout of Twitch’s Safety Advisory Council has been a disaster, this piece argues. The group is supposed to advise on issues of safety and harassment, and one of the council members has already become the target of harassment since joining. (Nathan Grayson / Kotaku)


ByteDance’s valuation has risen to more than $100 billion in recent private share transactions. The news reflects expectations that TikTok’s parent company will keep pulling in new advertisers. Here’s Bloomberg’s Lulu Yilun Chen, Vinicy Chan, Katie Roof, and Zheping Huang:

“The trading of ByteDance is reflective of the global wave of consumers who agree that ByteDance can displace Facebook as the leading social network,” said Andrea Walne, a partner at Manhattan Venture Partners who follows the secondary markets. […]

ByteDance has grown into a potent online force propelled in part by a TikTok short video platform that’s taken U.S. teenagers by storm. Investors are keen to grab a slice of a company that draws some 1.5 billion monthly active users to a family of apps that includes Douyin, TikTok’s Chinese twin, as well as news service Toutiao. That’s despite American lawmakers raising privacy and censorship concerns about its operation. This week, it poached Walt Disney Co. streaming czar Kevin Mayer to become chief executive officer of TikTok.

Twitter is testing a way to let you limit how many people can reply to your tweets. If you’re part of the test, when you compose a tweet, you’ll be able to select if you’ll allow replies from everyone, people you follow, or only people you @ mention. There are a lot of interesting implications here with regard to harassment and abuse — and also free expression. Jay Peters at The Verge has the story:

Limiting who can reply to your tweets could help prevent abuse and harassment on the platform. By keeping replies to a limited set of people, in theory, you could have more thoughtful and focused conversations with people of your choosing without the risk of trolls jumping into the conversation.

Facebook’s new AI tool will automatically identify items people put up for sale. The company’s “universal product recognition model” uses artificial intelligence to identify consumer goods, from furniture to fast fashion to fast cars. (James Vincent / The Verge)

Deutsche Bank analysts say Facebook’s push into online shopping could generate a $30 billion jump in annual revenue. The company will make money off transaction fees, as well as a possible increase in advertising dollars. (Rob Price / Business Insider)

Mark Zuckerberg went on CBS to discuss Shops. The interview also gets into Facebook’s responsibility to manage misinformation on the platform.

Facebook will limit offices to 25 percent occupancy, put people on shifts and require temperature checks when it lets employees back into workplaces in July. Staff will also have to wear masks in the office when not social distancing. (Mark Gurman and Kurt Wagner / Bloomberg)

Video chat tools like Meet, Zoom, and BlueJeans serve as meeting emulators. They attempt to copy and repeat the form of the meeting, but don’t capture the actual interactions, this writer argues. True! (Paul Ford / Wired)

Zoom suspended its free service to people in China. As of May 1st, individual free users can no longer host meetings on Zoom, but will still be able to join them. (Yifan Yu / Nikkei)

YouTube added bedtime reminders to help people log off late at night. The feature is part of a broader set of YouTube wellness and screen time tools released in 2018 as part of Google’s Digital Wellbeing initiative. A charming throwback to the days when we cared about screen time. (Nick Statt / The Verge)

The secure messaging app Signal added PINs, a new feature to help people move their profiles across devices. The move is intended to make the company less reliant on phone numbers as its users’ primary identification. (Bijan Stephen / The Verge)

People are hiding their social distance lapses from social media, a reversal of the typical use of Instagram where people once bragged about their social activities. All the secret quarantine relationships happening right now will make for a great Netflix series in 2025. (Kaitlyn Tiffany / The Atlantic)

Students are failing AP tests because the College Board testing portal doesn’t support the default photo format on iPhones. Students now have to spend weeks studying before retaking the test. Interfaces are important! Someone should start a newsletter about them! (Monica Chin / The Verge)

Things to do

Stuff to occupy you online during the quarantine.

Update your phone. You’ll need to have the latest version of iOS or Android to begin participating in exposure notification.

Check out beloved satirical website Clickhole, which returned on Wednesday under its new ownership.

Play Crucible, the first big video game developed by Amazon. The Verge’s Nick Statt found the shooter derivative but uniquely enjoyable.

And finally…

The joke in the above tweet is that Twitter disabled replies using the new audience-limiting features it unveiled today. Related joke:

Talk to us

Send us tips, comments, questions, and exposure notifications.

The Interface

How to think about polarization on Facebook





On Tuesday, the Wall Street Journal published a report about Facebook’s efforts to fight polarization since 2016, based on internal documents and interviews with current and former employees. Rich with detail, the report describes how Facebook researched ways to reduce the spread of divisive content on the platform, and in many cases set aside the recommendations of employees working on the problem. Here are Jeff Horwitz and Deepa Seetharaman:

“Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.” […]

Fixing the polarization problem would be difficult, requiring Facebook to rethink some of its core products. Most notably, the project forced Facebook to consider how it prioritized “user engagement”—a metric involving time spent, likes, shares and comments that for years had been the lodestar of its system.

The first thing to say is that “polarization” can mean a lot of things, and that can make the discussion about Facebook’s contribution to the problem difficult. You can use it in a narrow sense to talk about the way that a news feed full of partisan sentiment could divide the country. But you could also use it as an umbrella term to talk about initiatives related to what Facebook and other social networks have lately taken to calling “platform integrity” — removing hate speech, for example, or labeling misinformation.

The second thing to say about “polarization” is that while it has a lot of negative effects, it’s worth thinking about what your proposed alternative to it would be. Is it national unity? One-party rule? Or just everyone being more polite to one another? The question gets at the challenge of “fighting” polarization if you’re a tech company CEO: even if you see it as an enemy, it’s not clear what metric you would rally your company around to deal with it.

Anyway, Facebook reacted to the Journal report with significant frustration. Guy Rosen, who oversees these efforts, published a blog post on Wednesday laying out some of the steps the company has taken since 2016 to fight “polarization” — here used in that umbrella-term sense of the word. The steps include shifting the News Feed to include more posts from friends and family than publishers; starting a fact-checking program; more rapidly detecting hate speech and other malicious content using machine-learning systems and an expanded content moderation workforce; and removing groups that violate Facebook policies from algorithmic recommendations.

Rosen writes:

We’ve taken a number of important steps to reduce the amount of content that could drive polarization on our platform, sometimes at the expense of revenues. This job won’t ever be complete because at the end of the day, online discourse is an extension of society and ours is highly polarized. But it is our job to reduce polarization’s impact on how people experience our products. We are committed to doing just that.

Among the reasons the company was frustrated with the story, according to an internal Workplace post I saw, is that Facebook had spent “several months” talking with the Journal reporters about their findings. The company gave them a variety of executives to speak with on and off the record, including Joel Kaplan, its vice president of global public policy, who often pops up in stories like this to complain that some action might disproportionately hurt conservatives.

In any case, there are two things I think are worth mentioning about this story and Facebook’s response to it. One is an internal tension in the way Facebook thinks about polarization. And the other is my worry that asking Facebook to solve for divisiveness could distract from the related but distinct issues around the viral promotion of conspiracies, misinformation, and hate speech.

First, that internal tension. On one hand, the initiatives Rosen describes to fight polarization are all real. Facebook has invested significantly in platform integrity over the past several years. And, as some Facebook employees told me yesterday, there are good reasons not to implement every suggestion a team brings you. Some might be less effective than other efforts that were implemented, for example, or they might have unintended negative consequences. Clearly some employees on the team feel like most of their ideas weren’t used, or were watered down, including employees I’ve spoken with myself over the years. But that’s true of a lot of teams at a lot of companies, and it doesn’t mean that all their efforts were for nought.

On the other hand, Facebook executives largely reject the idea that the platform is polarizing in the tearing-the-country-apart sense of the word. The C-suite read closely a working paper that my colleague Ezra Klein wrote about earlier this year that casts doubt on social networks’ contribution to the problem. The paper by Levi Boxell, Matthew Gentzkow, and Jesse Shapiro studies what is known as “affective polarization,” which Klein defines as “the difference between how warmly people view the political party they favor and the political party they oppose.” They found that affective polarization had increased faster in the United States than anywhere else — but that in several large, modernized nations with high internet usage, polarization was actually decreasing. Klein wrote:

One theory this lets us reject is that polarization is a byproduct of internet penetration or digital media usage. Internet usage has risen fastest in countries with falling polarization, and much of the run-up in US polarization predates digital media and is concentrated among older populations with more analogue news habits.

Klein, who published a book on the subject this year, believes that social networks contribute to polarization in other ways. But the fact that there are many large countries where Facebook usage is high and polarization is decreasing helps to explain why the issue is not top of mind for Facebook’s C-suite. As does Mark Zuckerberg’s own stated inclination against platforms making editorial judgments on speech. (Which he reiterated at a virtual shareholders’ meeting today.)

So here you have a case where Facebook can be “right” in a platform integrity sense — look at all these anti-polarization initiatives! — while the Journal is right in a larger one: Facebook has been designed as a place for open discussion, and human nature ensures that those discussions will often be heated and polarizing, and the company has chosen to take a relatively light touch in managing the debates. And it does so because executives think the world benefits from raucous, few-holds-barred discussions, and because they aren’t persuaded that those discussions are tearing countries apart.

Where Facebook can’t wriggle off the hook, I think, is in the Journal’s revelation of just how important its algorithmic choices have been in the spread of polarizing speech. Again, here the problem isn’t “polarization” in the abstract — but in concrete harms related to anti-science, conspiracy, and hate groups, which grow using Facebook’s tools. The company often suggests that its embrace of free speech has created a neutral platform, when in fact its design choices often reward division with greater distribution.

This is the part of the Journal’s report that I found most compelling:

The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”

Facebook says that extremist groups are no longer recommended. But just today, the disinformation researcher Nina Jankowicz joined an “alternative health” group on Facebook and immediately saw recommendations that she join other groups related to white supremacy, anti-vaccine activism, and QAnon.

Ultimately, despite its efforts so far, Facebook continues to unwittingly recruit followers for bad actors, who use it to spread hate speech and misinformation detrimental to public health. The good news is that the company has teams working on those problems, and surely will develop new solutions over time. The question raised by the Journal is, when that happens, how closely their bosses will listen to them.


On Tuesday, Twitter added a link to two of President Trump’s tweets, designating them as “potentially misleading.” It took this action because Trump, as part of a disinformation campaign alleging that voting by mail will trigger massive vote fraud, was appearing to interfere with the democratic process in violation of the company’s policies.

Trump was outraged about the links, and tweeted about being censored to his 80 million followers. He threatened to shut down social media companies. He said “big action” would follow. At the direction of a White House spokeswoman, right-wing trolls began to harass Yoel Roth, Twitter’s head of site integrity, who has previously tweeted criticism of Trump. Members of Congress including Marco Rubio and Josh Hawley tweeted that Twitter’s action could not stand, and that social platforms should lose Section 230 protections for moderating speech — willfully misunderstanding Section 230 in the way that they always do. Late in the day, there was word of a forthcoming executive order, with no other details.

I could spend a lot of time here speculating about the coming battle between social networks and the Republican establishment, with Silicon Valley’s struggling efforts to moderate their unwieldy platforms going head-to-head with Republicans’ bad-faith attempts to portray them as politically biased. But the past few years have taught us that while Congress is happy to kick and scream about the failures of tech platforms, it remains loath to actually regulate them.

It’s true that we have seen some apparent retaliation from Trump against social networks — the strange fair housing suit filed against Facebook last year comes to mind. And several antitrust cases are currently underway that could result in significant action. But for the most part, as Makena Kelly writes today in The Verge, the bluster is as far as it ever really goes:

The president has never followed through on his threats and used his considerable powers to place legal limits on how these companies operate. His fights with the tech companies last just long enough to generate headlines, but flame out before they can make a meaningful policy impact. And despite the wave of conservative anger currently raining down on Twitter, there’s no reason to think this one will be any different.

Those flameouts are most tangible in the courts. On the same day as Trump’s tweets, the US Court of Appeals in Washington ruled against the nonprofit group Freedom Watch and fringe right figure Laura Loomer in a case alleging that Facebook, Google, and Twitter conspired to suppress conservative content online, according to Bloomberg. Whether it be Loomer or Rep. Tulsi Gabbard (D-HI) fighting the bias battle, the courts have yet to rule in their favor.

In fact, as former Twitter spokesman Nu Wexler noted, Trump has even less leverage over Twitter than he does over other tech companies. “Twitter don’t sell political ads, they’re not big enough for an antitrust threat, and he’s clearly hooked on the platform,” Wexler tweeted. And whatever Trump may think, as the law professor Kate Klonick noted, “The First Amendment protects Twitter from Trump. The First Amendment doesn’t protect Trump from Twitter.”

Facts and logic aside, get ready: you’re about to hear a lot more cries from people complaining that they have been censored by Twitter. And it will be all over Twitter.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending sideways: YouTube began fixing an error in its moderation system that caused comments containing certain Chinese-language phrases critical of China’s Communist Party to be automatically deleted. The company still won’t explain what caused the deletions in the first place, though some are speculating that Chinese trolls trained the YouTube algorithm to block the terms. (James Vincent / The Verge)

Trending down: Harry Sentoso, a warehouse worker in Irvine who was part of Amazon’s COVID-19 hiring spree, died after two weeks on the job. Sentoso was presumed to have the novel coronavirus after his wife tested positive. (Sam Dean / Los Angeles Times)

Virus tracker

Total cases in the US: More than 1,701,500

Total deaths in the US: At least 100,000

Reported cases in California: 100,371

Total test results (positive and negative) in California: 1,696,396

Reported cases in New York: 369,801

Total test results (positive and negative) in New York: 1,774,128

Reported cases in New Jersey: 156,628

Total test results (positive and negative) in New Jersey: 635,892

Reported cases in Illinois: 114,448

Total test results (positive and negative) in Illinois: 786,794

Data from The New York Times. Test data from The COVID Tracking Project.


Whistleblowers say Facebook failed to warn investors about illegal activity happening on its platform. A complaint filed with the Securities and Exchange Commission late Tuesday includes dozens of pages of screenshots of opioids and other drugs for sale on Facebook and Instagram, reports Nitasha Tiku at The Washington Post:

The filing is part of a campaign by the National Whistleblower Center to hold Facebook accountable for unchecked criminal activity on its properties. By petitioning the SEC, the consortium is attempting to get around a bedrock law — Section 230 of the Communications Decency Act — that exempts Internet companies from liability for the user-generated content on their platform.

Instead, the complaint focuses on federal securities law, arguing that Facebook’s failure to tell shareholders about the extent of illegal activity on its platform is a violation of its fiduciary duty. If Facebook alienates advertisers and has to shoulder the true cost of scrubbing criminals from its social networks, it could affect investors in the company, the complaint argues.

Facebook ran a multi-year charm offensive to develop friendly relationships with powerful state prosecutors who could use their investigative powers to harm the company’s revenue growth. In the end, the strategy had mixed results: Most of those attorneys general are now investigating the company for possible antitrust violations. I never cease to be amazed how ineffective tech lobbying is, given the money that gets spent on it. (Naomi Nix / Bloomberg)

A federal appeals court rejected claims that Twitter, Facebook, Apple, and Google conspired to suppress conservative views online. The decision affirmed the dismissal of a lawsuit by the nonprofit group Freedom Watch and the right-wing YouTube personality Laura Loomer, who accused the companies of violating antitrust laws and the First Amendment in a coordinated political plot. (Erik Larson / Bloomberg)

The Arizona attorney general sued Google for allegedly tracking users’ locations without permission. The case appears to hinge on whether Android menus were too confusing for the average person to navigate. (Tony Romm / Washington Post)

India’s antitrust body is looking into allegations that Google abused its market position to unfairly promote its mobile payments app. The complaint alleges Google hurt competition by prominently displaying Google Pay inside the Android app store in India. (Aditya Kalra and Aditi Shah / Reuters)

Google sent 1,755 warnings to users whose accounts were targets of government-backed attackers last month. The company highlighted new activity from “hack-for-hire” firms, many based in India, that have been creating Gmail accounts spoofing the World Health Organization. (Google)

Switzerland is now piloting a COVID-19 contact tracing app that uses the Apple-Google framework. The app, SwissCovid, is the first to put the Apple-Google model to use. (Christine Fisher / Engadget)

Silicon Valley’s billionaire Democrats are spending tens of millions of dollars to help Joe Biden catch up to President Trump’s lead on digital campaigning. These billionaires’ arsenals are funding everything from nerdy political science experiments to divisive partisan news sites to rivalrous attempts to overhaul the party’s beleaguered data file. (Theodore Schleifer / Recode)

A war has broken out on Reddit regarding how content is moderated. The feud started when a list of “PowerMods” began circulating, with the title “92 of top 500 subreddits are controlled by just 4 people.” (David Pierce / Protocol)

Twitter’s anti-porn filters blocked the name of Boris Johnson’s chief adviser, Dominic Cummings, from trending on the platform. Cummings has dominated British news for almost a week after coming under fire for traveling across the country during the coronavirus lockdown. It’s nice to read a truly funny story about content moderation for a change. (Alex Hern / The Guardian)


TikTok’s parent ByteDance generated more than $3 billion of net profit last year. The company’s revenue more than doubled from the year before, to $17 billion, propelled by high growth in user traffic. Here are Katie Roof and Zheping Huang at Bloomberg:

The company owes much of its success to TikTok, now the online repository of choice for lip-synching and dance videos by American teens. The ambitious company is also pushing aggressively into a plethora of new arenas from gaming and search to music. ByteDance could fetch a valuation of between $150 billion and $180 billion in an initial public offering, a premium relative to sales of as much as 20% to social media giant Tencent thanks to a larger global footprint and burgeoning games business, estimated Ke Yan, Singapore-based analyst with DZT Research.

Facebook’s experimental app division has a new product out today called Collab. The app lets users create short music videos using other people’s posts, which sounds a lot like TikTok. (Nick Statt / The Verge)

Facebook’s annual shareholder meeting was held virtually on Wednesday. One item on the agenda was a call for Mark Zuckerberg to relinquish his position as chair of Facebook’s board of directors, and be replaced by an independent figure. Somehow it failed! (Rob Price / Business Insider)

Instagram will start sharing revenue with creators for the first time, through ads in IGTV and badges that viewers can purchase on Instagram Live. The company has hinted for more than a year that ads would come to IGTV, often saying the long-form video offering would be the most likely place it’d first pay creators. Any time creators develop a direct relationship with their audience and profit from it, I get super happy. (Ashley Carman / The Verge)

Google is rolling out a series of updates aimed at helping local businesses adapt to the COVID-19 pandemic. The company is expanding a product that allows businesses to sell gift cards during the government shutdown. It’s also allowing restaurants to point to their preferred delivery partners for customers that want to order through third-party apps. (Sarah Perez / TechCrunch)

About half of remote workers in the US report feeling less connected to their company, being more stressed in ways that negatively impact their work, and working more hours from home. The downsides could become prominent as more companies extend remote working deadlines beyond the coronavirus pandemic. (Kim Hart / Axios)

Things to do

Stuff to occupy you online during the quarantine.

Watch HBO Max. It’s here, and it’s totally diluting the HBO brand!

Turn your Fuji camera into a high-end webcam with this new software. It works over USB.

Subscribe to a new newsletter from Google walkout organizer Claire Stapleton. Tech Support promises to offer “existential advice for today’s tech worker.”

Replace your Zoom calls with a Sims-style virtual hangout. It’s a new twist on video chat from a company called Teeoh.

Call 1-775-HOT-VINE to hear audio clips of famous Vines. I just did and it was extremely charming.

Those good tweets …

Talk to us

Send us tips, comments, questions, and polarizing Facebook posts:


The Interface

Why Twitter labeling Trump’s tweets as “potentially misleading” is a big step forward




From time to time a really bad post on a social network gets a lot of attention. Say a head of state falsely accuses a journalist of murder, or suggests that mail-in voting is illegal — those would be pretty bad posts, I think, and most people working inside and outside of the social network could probably agree on that. In my experience, though, average people and tech people tend to think very differently about what to do about a post like that. Today I want to talk about why.

When an average person sees a very bad post on a social network, they may call for it to be removed immediately. They will justify this removal on moral grounds — keeping the post up, they will say, is simply indecent. To leave it up would reflect poorly on the moral character of everyone who works at the company, especially its top executives. Some will say the executives should resign in disgrace, or possibly be arrested. Congress may begin writing letters, and new laws will be proposed, so that such a bad post never again appears on the internet.

When a tech company employee sees a really bad post, they are just as likely to be offended as the next person. And if they work on the company’s policy team, or as a moderator, they will look to the company’s terms of service. Has a rule been broken? Which one? Is it a clear-cut violation, or can the post be viewed multiple ways?

If a post is deeply offensive but not covered by an existing rule, the company may write a new one. As it does, employees will try to write the rule narrowly, so as to rule in the maximum amount of speech while ruling out only the worst. They will try to articulate the rule clearly, so that it can be understood in every language by an army of low-paid moderators (moderators who, incidentally, may be developing post-traumatic stress disorder and related conditions).

Put another way, when an average person sees a really bad post, their instinct is to react with anger. And when a tech person sees a really bad post, their instinct is to react practically.

All of that context feels necessary to understand two Twitter debates playing out today: one over what Twitter ought to do about the fact that President Trump keeps tweeting without evidence that one of the few high-profile Republicans who regularly speaks out about him, the onetime congressman and current MSNBC host Joe Scarborough, may be implicated in the 2001 death of a former staffer. And one over what to do about the president’s war on voting by absentee ballot.

As to the former: according to the medical examiner, former Scarborough aide Lori Klausutis died of a blood clot. Now her widower, Timothy Klausutis, is petitioning Twitter CEO Jack Dorsey to remove Trump’s tweets suggesting there may have been foul play. John Wagner wrote up the day’s events in the Washington Post:

With no evidence, Trump has continued to push a conspiracy theory that Scarborough, while a member of Congress, had an affair with his married staffer and that he may have killed her — a theory that has been debunked by news organizations including The Washington Post and that Timothy Klausutis called a “vicious lie” in his letter to Dorsey.

On Tuesday morning, Trump went on Twitter again to advocate the “opening of a Cold Case against Psycho Joe Scarborough,” which he said was “not a Donald Trump original thought.”

“So many unanswered & obvious questions, but I won’t bring them up now!” Trump added. “Law enforcement eventually will?”

If you believe social networks are obligated to remove posts that are indecent, it’s clear why you would want these tweets to come down. The president is inflicting an emotional injury on an innocent, bereaved man for political gain. (Trump has historically benefitted from falsely suggesting his Republican opponents are murderers, as Jonathan Chait notes here.)

But if your job is to write or enforce policy at a tech company, your next steps are far less clear. Consider the facts. Did Trump say definitively that Scarborough committed murder? He didn’t — “maybe or maybe not,” he tweeted this morning. Did Trump incite violence against Scarborough, directly or indirectly? (Twitter has promised to hide such tweets behind a warning label, but it has never done so.) I don’t think so, and while encouraging law enforcement to investigate the case arguably represents an abuse of presidential power, our nation’s founders invested the responsibility for reining in a wayward chief executive not with private companies but with the other two branches of government.

Let’s make it more complex: Scarborough is a public figure — a former congressman, no less. Traditionally social networks have tolerated much more indecency when it comes to average people wanting to yell at the rich and powerful, and when it comes to the rich and powerful yelling at one another. And when two of those figures are engaged in political discourse — the kind of discourse that the First Amendment, which informs so many of the principles of tech company speech policies, sought to protect above all else — a tech policy person would probably want to give that speech the widest possible latitude.

I spent the day talking with former Twitter employees who worked on speech and policy issues. For the most part, they thought Trump’s Scarborough tweets should stay up. For one, the tweets don’t violate existing policy. And two, they believe you can’t design a policy that bans these tweets that doesn’t also massively chill speech across the platform. As one former employee put it to me, “If speculation about unproven crime is not allowed, I have bad news for anyone who wants to tweet about a true crime podcast.”

Now, it’s possible for me to imagine a time when Twitter would have to take action against these tweets. There was a time when Alex Jones’ tweets and videos about the Sandy Hook school shooting also fell into the realm of “speculating about true crime,” even though his conspiracy theories were almost certainly promoted in bad faith. But then Jones’ fans began stalking and harassing families of the murder victims, in some cases threatening to kill them. Eventually Jones was removed from most of the big social platforms.

If Trump continues to promote the lie about Scarborough, we can assume some of his followers will take matters into their own hands. It’s been barely a year since one of those followers was sentenced to 20 years in prison for mailing 16 pipe bombs to people he perceived to be Trump’s enemies. If something similar happens as a result of the Scarborough tweets, Twitter will face criticism for failing to act. It’s a terrible position for the company to be in.

But mostly it’s just a terrible thing for the president to do. And in a democracy we have remedies for bad behavior that go well beyond asking a tech company to de-platform a politician. You can speak your mind, you can march in the streets, and you can vote. That’s why, for most problems of political speech, my preferred solution is more speech, in the form of more votes.

Which brings us to the day’s surprising conclusion: Twitter’s decision to label, for the first time, some of Trump’s tweets as potentially misleading. Makena Kelly has the story in The Verge:

On Tuesday, Twitter labeled two tweets from President Donald Trump making false statements about mail-in voting as “potentially misleading.” It’s the first time the platform has fact-checked the president.

The label was imposed on two tweets Trump posted Tuesday morning falsely claiming that mail-in ballots would be “substantially fraudulent” and would result in “a rigged election.” The tweets focused primarily on California’s efforts to expand mail-in voting due to the novel coronavirus pandemic. On Sunday, the Republican National Committee sued California Gov. Gavin Newsom over the state’s moves to expand mail-in voting.

According to a Twitter spokesperson, the tweets “contain potentially misleading information about voting processes and have been labeled to provide additional context around mail-in ballots.” When a user sees the tweets from Trump, a link from Twitter is attached to them that says “Get the facts about mail-in ballots.” The link leads to a collection of tweets and news articles debunking the president’s statements.

This story is surprising for several reasons. It involves Twitter, a company notoriously prone to inaction, making a decisive move against its most powerful individual user. It ensures a long stretch of partisan mud-wrangling over which future tweets from which other politicians deserve similar treatment — and over whether one side or another is being punished disproportionately. And it puts Twitter prominently in the position it has long sought to avoid — “the arbiter of truth,” chiming in when the president lies to say that no, actually, it’s legal to vote by absentee ballot.

And yet at the same time, Twitter’s decision was rooted in principle. In January Twitter began allowing users to flag tweets that contain misleading information about how to vote. Today it applied that policy, fairly and with relative precision. Some have criticized the design and wording of the actual label — “Get the facts about mail-in ballots” doesn’t exactly scream “the president is lying about this.” But it still feels like a step forward, and not a small one.

Social networks that reach global scale will always suffer from really bad posts, some of them posted by their most prominent users. And it’s precisely because those platforms have become so important to political speech that I would rather decisions about what stays up and what comes down not be dictated by the whims of unelected, unaccountable founders.

Twitter’s decision to leave up some of Trump’s awful tweets and label others as misleading won’t fully satisfy anyone. But in my view this is a case where the company has made some hard decisions in a relatively judicious way. And anyone who tries to write a better, more consistent policy — one that goes beyond “this is indecent, take it down” — will find that it’s much harder than it looks.

The Ratio

Today in news that could affect public perception of the big tech platforms.

⬆️Trending up: Facebook announced new features for Messenger that will alert users about messages that appear to come from financial scammers or child abusers. The company said the detection will occur only based on metadata—not analysis of the content of messages—so that it doesn’t undermine end-to-end encryption. (Andy Greenberg / Wired)

⬇️Trending down: YouTube deleted comments with two phrases that insult the Chinese Communist party. The company said it was an error. (James Vincent / The Verge)

⬇️Trending down: Amazon supplied local TV news stations with a propaganda reel intended to change the subject from deaths and illnesses at its distribution centers. At least 11 stations aired it, and this video lets you watch various news anchors robotically parrot the PR talking points. (Nick Statt / The Verge)

Virus tracker

Total cases in the US: More than 1,685,800

Total deaths in the US: At least 98,800

Reported cases in California: 99,547

Total test results (positive and negative) in California: 1,696,396

Reported cases in New York: 368,669

Total test results (positive and negative) in New York: 1,774,128

Reported cases in New Jersey: 155,764

Total test results (positive and negative) in New Jersey: 635,892

Reported cases in Illinois: 113,402

Total test results (positive and negative) in Illinois: 786,794

Data from The New York Times. Test data from The COVID Tracking Project.


Facebook spent years studying how the platform polarized people, according to sources and internal documents. One slide from a 2018 presentation read “our algorithms exploit the human brain’s attraction to divisiveness.” Here are Jeff Horwitz and Deepa Seetharaman from the Wall Street Journal:

Facebook had kicked off an internal effort to understand how its platform shaped user behavior and how the company might address potential harms. Chief Executive Mark Zuckerberg had in public and private expressed concern about “sensationalism and polarization.”

But in the end, Facebook’s interest was fleeting. Mr. Zuckerberg and other senior executives largely shelved the basic research, according to previously unreported internal documents and people familiar with the effort, and weakened or blocked efforts to apply its conclusions to Facebook products.

Facebook policy chief Joel Kaplan, who played a central role in vetting proposed changes, argued at the time that efforts to make conversations on the platform more civil were “paternalistic,” said people familiar with his comments.

President Trump is considering creating a panel to review complaints of anticonservative bias on social media. Facebook, Twitter, and Google all pushed back against the proposed panel, denying any anticonservative bias. I imagine today’s action from Twitter will come up, if this thing turns out to be real. (John D. McKinnon and Alex Leary / The Wall Street Journal)

Doctors with verified accounts on Facebook are spreading coronavirus misinformation. The company has been trying to crack down on misinformation about the virus, but the accounts are still able to reach hundreds of thousands of people regularly. (Rob Price / Business Insider)

Here’s a guide to the most notorious spin doctors and conspiracy theorists spreading misinformation about the coronavirus pandemic. (Jane Lytvynenko, Ryan Broderick and Craig Silverman / BuzzFeed)

Influencers say Instagram is biased against plus-sized bodies — and they might be right. Content moderation on social media is usually a mix of artificial intelligence and human moderators, and both methods have a potential bias against larger bodies. (Lauren Strapagiel / BuzzFeed)

Joe Biden’s digital team is trying to raise his online profile prior to the 2020 election while understanding his limitations on social media. Which is another way of saying he’s still not on TikTok. (Sam Stein / Daily Beast)

Democrats are introducing a new bill that would tighten restrictions on online political ad-targeting on platforms like Facebook. The Protecting Democracy from Disinformation Act would limit political advertisers to targeting users based only on age, gender and location — a move intended to crack down on microtargeting. (Cristiano Lima / Politico)

Two new laws in Puerto Rico make it a crime to report information about emergencies that the government considers “fake news.” The ACLU filed a lawsuit on behalf of two Puerto Rican journalists who fear the laws will be used to punish them for their reporting on the coronavirus crisis. (Sara Fischer / Axios)

One of the first contact-tracing apps in the US, North and South Dakota’s Care19, violates its own privacy policy by sharing location data with an outside company. The oversight suggests that state officials and Apple, both of which were responsible for vetting the app before it became available April 7th, were asleep at the wheel. (Geoffrey A. Fowler / The Washington Post)

China’s virus-tracking apps have been collecting information, including location data, on people in hundreds of cities across the country. But the authorities have set few limits on how that data can be used. And now, officials in some places are loading their apps with new features, hoping the software will live on as more than just an emergency measure. (Raymond Zhong / The New York Times)

Serious security vulnerabilities were discovered in Qatar’s mandatory contact tracing app. The security flaw, which has now been fixed, would have allowed bad actors to access highly sensitive personal information, including the name, national ID, health status and location data of more than one million users. (Amnesty International)

Inside the NSA’s secret tool for mapping your social network. Edward Snowden revealed the agency’s phone-record tracking program. But the database was much more powerful than anyone knew. (Barton Gellman / Wired)

Silicon Valley’s main data-protection watchdog in Europe came under attack for taking too long to wrap up probes into Facebook, Instagram and WhatsApp. The group has yet to issue any significant fines two years after the EU empowered it to levy hefty penalties for privacy violations. (Stephanie Bodoni / Bloomberg)

A court in the Netherlands is forcing a grandmother to delete photos of her grandkids that she posted on Facebook and Pinterest without their parents’ permission. The judge ruled the matter was within the scope of the EU’s General Data Protection Regulation. (BBC)


⭐Shopping for Instacart is dangerous during the pandemic. Now, workers who’ve gotten sick say they haven’t been able to get the quarantine pay they were promised. Russell Brandom at The Verge has the story:

It’s a common story. On forums and in Facebook groups, Instacart’s sick pay has become a kind of sour joke. There are lots of posts asking how to apply, but no one seems to think they’ll actually get the money. The Verge spoke to eight different workers who were placed under quarantine — each one falling prey to a different technicality. A worker based in Buffalo was quarantined by doctors in March but didn’t qualify for an official test, leaving him with no verification to send to reps. In western Illinois, a man received a quarantine order from the state health department, but without a test, he couldn’t break through. Others simply fell through the cracks, too discouraged to fight the claim for the weeks it would likely take to break through.

Amazon lost some online shoppers to rivals during the pandemic as it struggled to keep up with demand. Now the retail giant is turning back to faster shipping times and big sales to lure people back to the platform. (Karen Weise / The New York Times)

Google said the majority of its employees will work from home through 2020. It’s giving everyone $1,000 to cover any new work-from-home expenses. (Chaim Gartenberg / The Verge)

Welcome to the age of the TikTok cult. These aren’t the ideological cults most people are familiar with. Instead, they are open fandoms revolving around a single creator. Right now they’re being weaponized to perform social-media pranks, but it feels like something much darker is around the corner. (Taylor Lorenz / The New York Times)

Zoom temporarily removed Giphy from its chat feature, days after Facebook acquired the GIF platform for $300 million. “Once additional technical and security measures have been deployed, we will re-enable the feature,” the company said.

Facebook renamed Calibra, the digital wallet it hopes will one day be used to access the Libra digital currencies, to “Novi.” The company said that the new name was inspired by the Latin words “novus” and “via,” which mean “new” and “way” — and not, as I had assumed, the English words “non” and “viable.” (Jon Porter / The Verge)

Facebook’s internal R&D group launched a new app called CatchUp that makes it easier for friends and family in the US to coordinate phone calls with up to 8 people. I do not get this one at all. (Sarah Perez / TechCrunch)

Coronavirus may have saved Facebook from its fate as a chatroom for old people, this piece argues. There are early signs that young people are returning to the service. (Jael Goldfine / Paper)

Facebook’s Menlo Park headquarters have shaped the city. So too would an exodus of employees now that the company is shifting to remote work. (Sarah Emerson / OneZero)

Things to do

Stuff to occupy you online during the quarantine.

Listen to Boom / Bust: The Rise and Fall of HQ Trivia. It’s a fun new podcast from The Ringer about the company’s dramatic history; I appear on episode two.

Watch all of Fraggle Rock on Apple TV+. One of my favorite childhood shows finally has a streaming home.

Check out the launch lineup for HBO Max, which premieres Wednesday. If you already subscribe to HBO Now, as I do, you’re about to get a lot more movies and TV shows for the price.

Subscribe to Alex Kantrowitz’s new newsletter about big tech. One of my favorite reporters, Alex announced today he’s leaving BuzzFeed to go independent. You can sign up to get his new project via email here.

And finally…

Talk to us

Send us tips, comments, questions, and YouTube comments critical of the Chinese Communist party:


The Interface

How Facebook’s past acquisitions could haunt its purchase of Giphy




On Friday, Facebook made its fifth-largest known acquisition ever. The company bought Giphy, a database and search engine for the short looping videos known as GIFs, for $400 million. Today let’s talk about some of the reasons, stated and unstated, that Facebook bought Giphy, and then consider what might come next.

The stated reason for acquiring Giphy, as expressed in this blog post from Instagram’s head of product announcing the deal, is twofold. One, Facebook can now build tighter integrations between the products to enhance stickers, stories, and other products. And two, it can make further investments in Giphy’s technology and content library to benefit all the companies that rely on Giphy for GIF supply. Here’s Vishal Shah:

People will still be able to upload GIFs; developers and API partners will continue to have the same access to GIPHY’s APIs; and GIPHY’s creative community will still be able to create great content.

The two companies began talking before the pandemic, I’m told, to explore some sort of expanded partnership. More than half of the GIFs sent through Giphy land on Facebook-owned apps, and half of those land on Instagram specifically. So it’s natural that the two companies would be in regular conversation.

The problem for Giphy is that its business wasn’t working. The 7-year-old company, which had raised $150.9 million, had developed a convoluted advertising model in which it would host GIFs for brands and let them pay to promote them in conversation. That generated some level of experimental revenue from advertisers, but the product failed to take off. Giphy claimed 700 million daily users. Two people close to the deal told me it likely would have gone out of business had it not been acquired, and Instagram chief Adam Mosseri tweeted that Giphy “needed a home.” (He said a bit more to Sara Fischer.)

At the same time, GIFs are a core part of any social app, and Giphy had already built the largest independent GIF library. (Google acquired the other big player, Tenor, in 2018.) There’s obvious strategic value to Facebook in acquiring a tool that is fundamental to the way that people express themselves online. A Giphy ad deck from last spring that someone sent me reported that the company served 7 billion GIFs per day, and so without Giphy in the world Facebook would have to find another way to source 3.5 billion daily GIFs.

Better yet, from Facebook’s perspective, Giphy was available at a discount. The app had last raised funding in 2016 at a valuation of $600 million, and the combination of a failing ad business and pandemic-related uncertainty had given the company a 33 percent haircut. The deal is still large by Facebook standards, though, suggesting that other players may have been competing for it. Giphy integrates with Apple’s iMessage, ByteDance’s TikTok, Slack, Snapchat, and Twitter, among many others, and it’s not hard to imagine any of them putting in an offer. (That said, I imagine $400 million was too steep a price tag for most of them.)

For all these reasons, few scoffed when Facebook announced its purchase. But given the company’s history of brilliant, pricey strategic purchases, there was a sense over the weekend that some greater game must be unfolding. To me it seemed like shrewd dealmaking during troubled times — buy a useful thing for cheap — but I also suspected that there might have been a more anticompetitive motive in play. Sarah Frier explored this question in Bloomberg:

Giphy provides the same search service to many of Facebook’s competitors, Apple Inc.’s iMessage, Twitter, Signal, TikTok and others. The company has a view of the health of those platforms and how often people use them, which is exactly the kind of insight Facebook values most, and has sought in the past. After Giphy joins Facebook, the company will maintain those integrations, and will keep getting data from GIF searches and posts around the internet. […]

Since Facebook doesn’t own a mobile phone operating system like iOS or Android, it has relied on other means to understand competitors’ strengths — sometimes getting in trouble in the process. In 2013, for instance, Facebook acquired Onavo, an Israeli company that made a VPN, a tool to keep online activity private. Just not from Facebook, which analyzed the data to see which apps were getting popular, and then came up with ways to compete with or purchase them. Apple in 2018 banned the Onavo app, declaring that the data collection violated its app store rules.

Mosseri denied this, as did other Facebook executives I spoke with over the weekend. While it’s tempting to imagine Facebook building a sequel to Onavo as an early-warning system for potential threats, at most Giphy would be redundant in this regard. When TikTok arose as a threat in 2018, Facebook could tell because the company was spending $1 billion on ads — many of them Facebook ads. And when smaller threats emerge, Facebook can tell because people post about them … on Facebook.

If a new social app arose that used a Giphy integration, and Facebook could see that it was serving them exponentially more GIFs month after month, that could potentially be useful to the company. But it seems unlikely, given all the other data at Facebook’s fingertips, that it would be all that surprising.

There’s a secondary data question, though, and it’s how all of Giphy’s partners feel about suddenly becoming Facebook customers. An important question is whether Facebook will receive data about individual consumer behavior through Giphy; the answer seems to be no. Ben Thompson, who beat me to many of these points in his newsletter today, explains how (and has a fascinating aside at the end):

The GIPHY API, on the other hand, which allows for a custom-built integration, has no such requirement, and Signal explained in 2017 how GIPHY’s service can be proxied to hide all user data. Slack has already said that they proxy GIPHY in the same way, and I strongly suspect that Twitter and Apple do the same. That means that Facebook can get total usage data from these apps, but not individual user data (and as further evidence that this sort of proxying is effective, Facebook-owned WhatsApp actually uses Google’s Tenor service; I highly doubt Facebook would have tolerated that to-date if Google were getting per-user data).
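The proxying pattern Thompson describes is worth making concrete. Below is a minimal sketch of what a Signal- or Slack-style proxy might do before forwarding a search to GIPHY: pass along only the query and the app’s own API key, and strip anything that could identify the end user. The search endpoint is GIPHY’s real public one, but the function name, the header list, and the shared user-agent string are my own illustrative choices, not drawn from any actual codebase.

```python
# Sketch of the "proxy GIPHY" pattern: clients talk only to the app's
# server, which forwards searches to GIPHY with user-identifying data
# removed. GIPHY (and now Facebook) sees aggregate usage, not users.

from urllib.parse import urlencode

GIPHY_SEARCH = "https://api.giphy.com/v1/gifs/search"

# Request headers that could identify or fingerprint an individual user.
STRIP_HEADERS = {"cookie", "x-forwarded-for", "user-agent", "referer"}

def build_upstream_request(query: str, client_headers: dict, api_key: str):
    """Return (url, headers) for the request the proxy sends to GIPHY.

    The upstream request carries only the search term and the app's own
    API key; headers that could tie the request back to an individual
    user are dropped, and every user shares one generic user agent.
    """
    url = f"{GIPHY_SEARCH}?{urlencode({'api_key': api_key, 'q': query})}"
    headers = {
        k: v for k, v in client_headers.items()
        if k.lower() not in STRIP_HEADERS
    }
    headers["User-Agent"] = "example-app-proxy/1.0"  # hypothetical shared UA
    return url, headers
```

Under this scheme the only per-request signal GIPHY receives is the search term and the volume of traffic from the proxy’s IP range, which is exactly the “total usage data, but not individual user data” distinction Thompson draws.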

Meanwhile at The Verge, Jay Peters asks Giphy’s most high-profile partners what they make of the deal, and they responded in two ways: either saying that they had been hiding user data from Giphy, or declining to comment at all. Ultimately, these partners are going to vote with their products. If they come to view Giphy as a data giveaway to Facebook, they’re likely to find alternatives. But if Apple and Snap remain Giphy customers, perhaps skepticism of the deal will subside. (I wouldn’t count on it.)
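The proxying approach Thompson describes can be sketched in a few lines. In this hypothetical helper (the endpoint and parameter names follow Giphy’s public search API, but the function itself is my own illustration), the app’s server forwards GIF searches on the user’s behalf, so Giphy sees only the proxy’s traffic and API key — never an individual user’s IP, cookies, or identity:

```python
# Sketch of a privacy proxy for GIF search, in the style Signal described:
# the client talks to the app's server, and only the server talks to GIPHY.
# GIPHY can still measure total usage per partner, but not per user.

GIPHY_SEARCH_URL = "https://api.giphy.com/v1/gifs/search"

# Request headers that could identify an individual user; never forwarded.
STRIPPED_HEADERS = {"cookie", "authorization", "x-forwarded-for", "user-agent"}

def build_upstream_request(user_request: dict, api_key: str) -> dict:
    """Turn a user's search request into an anonymized upstream request.

    Only the search term survives. User-identifying headers are dropped,
    and the proxy attaches its own shared API key, so every query looks
    identical to GIPHY regardless of which user sent it.
    """
    return {
        "url": GIPHY_SEARCH_URL,
        "params": {"q": user_request["params"]["q"], "api_key": api_key},
        "headers": {
            name: value
            for name, value in user_request.get("headers", {}).items()
            if name.lower() not in STRIPPED_HEADERS
        },
    }

# Example: a user searches for "cat" with identifying headers attached.
incoming = {
    "params": {"q": "cat"},
    "headers": {"Cookie": "session=abc123", "User-Agent": "ChatApp/2.1"},
}
upstream = build_upstream_request(incoming, api_key="PROXY_KEY")
# The upstream request carries the query and the proxy's key, but nothing
# that ties it back to the user who typed it.
```

This is why partners like Slack and Signal can keep using the service without handing Facebook per-user data: aggregate query volume flows through, individual identity does not.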

Among the current skeptics are some members of Congress. Here’s Makena Kelly in The Verge:

In statements Friday, Republican Sen. Josh Hawley (R-MO) and Democratic Sens. Elizabeth Warren (D-MA) and Amy Klobuchar (D-MN) were skeptical of the deal.

“Facebook keeps looking for even more ways to take our data,” Hawley said in a statement to The Verge. “Just like Google purchased DoubleClick because of its widespread presence on the internet and ability to collect data, Facebook wants Giphy so it can collect even more data on us. Facebook shouldn’t be acquiring any companies while it is under antitrust investigation for its past purchases.”

There’s something darkly funny, to me anyway, about Giphy being the Facebook acquisition that rouses Congressional antitrust hawks from their multi-decade slumber party. Is Congress going to assert that Facebook now has a GIF monopoly? What are the barriers to entry to creating GIFs, exactly? We desperately need Congress to enforce antitrust when it comes to social networks acquiring other social networks, but social networks acquiring floundering content libraries seems like it ought to remain permissible.

But you know what they say about generals fighting the last war. Knowing what they know now, it seems likely that Congress would not today approve Facebook buying Instagram or WhatsApp. It would be incredible if, so many years after those purchases, it wound up being Giphy that paid the price for those failures.


On Thursday I wrote about the idea that COVID-19 is making Silicon Valley a less attractive place to live, noting that there are already stories about some tech workers heading for cheaper pastures. I got great responses from folks working at Facebook, Google, and Twitter, among other places, reflecting a wide range of views. Two, I think, are especially worth calling out. One is that employees’ arbitrage scheme might be less effective than they hope, because companies already know that other places are cheaper to live and will lower their pay accordingly once they move. (I’m told that Google and Facebook already do this.)

The other is that fleeing to the wilderness is really only a good option for people who are partnered up and well established in their careers. Younger and more junior employees benefit immensely from city life and office life. As one younger tech worker told me:

Young people need to learn social working skills in their first job (and it won’t happen over Zoom). Young people need to gossip about coworkers. Young people need to date. Young people need to be around other people. My colleagues who are making decisions based on these policies are — by and large — married with kids. I think that’s expected! and I think it’s really good for them. But the specific effects of that — employers’ most senior employees moving offsite — could have a drastic impact on the future of how these companies operate, and I don’t think I’ve seen that well represented.

It’s a great point. In any case, I do think Silicon Valley will survive. As another tech worker put it to me in response to the newsletter: “The Bay Area (and all of California) has been desirable and expensive for over 100 years. It’s so self-centered for the tech community to think they are the reason why SF is desirable and that if they leave, the city will collapse.”

In any case, I’m not going anywhere.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: Instagram launched a series of wellness Guides to help people during the pandemic. Creators will now have the ability to connect with credible organizations to share resources on managing grief and anxiety, among other things. (Instagram)

Virus tracker

Total cases in the US: More than 1,503,600

Total deaths in the US: At least 89,800

Reported cases in California: 80,803

Total test results (positive and negative) in California: 1,292,672

Reported cases in New York: 355,037

Total test results (positive and negative) in New York: 1,439,557

Reported cases in New Jersey: 148,039

Total test results (positive and negative) in New Jersey: 505,569

Reported cases in Illinois: 94,362

Total test results (positive and negative) in Illinois: 603,241

Data from The New York Times. Test data from The COVID Tracking Project.


The coronavirus pandemic has prompted Mark Zuckerberg to take a more hands-on approach to running Facebook, a shift that started in 2016. In the process, COO Sheryl Sandberg has seen her role diminish, report Mike Isaac, Sheera Frenkel and Cecilia Kang of The New York Times:

Now, the coronavirus has presented Mr. Zuckerberg with the opportunity to demonstrate that he has grown into his responsibilities as a leader — a 180-degree turn from the aloof days of 2016. It’s given him the chance to lead 50,000 employees through a crisis that, for once, is not of their own making. And seizing the moment might allow Mr. Zuckerberg to prove a thesis that he truly believes: That if one sees past its capacity for destruction, Facebook can be a force for good.

Facebook hired a former deputy prime minister of the UK to fix its reputation and governance. “Since arriving, Clegg has ushered into existence the company’s external oversight board, helped shepherd Zuckerberg’s most significant policy speech to date and defended the company’s controversial policies on political speech. And this year, Clegg has been intimately involved in shaping the company’s coronavirus response, in particular working with dozens of governments around the world to figure out what role the social network can and should play in the pandemic—not retreating, but leaning into its role in society and even politics.” (Nancy Scola / Politico)

The Supreme Court rejected a lawsuit against Facebook for allegedly providing “material support” to terrorists by hosting their content. The case, Force v. Facebook, was brought by the families of five Americans who were hurt or killed by Palestinian attacks in Israel. An important Section 230 case. (Adi Robertson / The Verge)

Facebook’s hesitancy to wade deep into the waters of fact checking is based on the fear that debunking a bogus claim could make the lie grow stronger. But whatever the company thinks about the backfire effect, this phenomenon has not been demonstrated in any convincing way. (Ethan Porter / Wired)

Anti-vaxxers on Instagram are fueling coronavirus conspiracy theories. The company’s efforts to curb health misinformation have done little to stem the flow of conspiracy theories related to COVID-19. (Karissa Bell / Engadget)

India’s antitrust watchdog is looking into allegations that WhatsApp engaged in anticompetitive behavior. The complaint says the company bundled its digital payment feature within its messaging app, allowing it to abuse its market position and penetrate India’s booming digital payments market. (Aditi Shah and Aditya Kalra / Reuters)

Attorney General William Barr voiced his frustration with Apple for failing to help the US government unlock the Pensacola shooter’s iPhone. He said voters and Congress should make encryption decisions — not tech companies. (Chris Welch / The Verge)

The pandemic has intensified the fight between Amazon and labor. But despite several clashes in Europe, labor activism hasn’t stopped the company from dominating online retail. (Liz Alderman and Adam Satariano / The New York Times)

Also: Amazon is planning to gradually reopen its French warehouses starting on May 19th. The company is finalizing an agreement with unions to end a dispute over coronavirus protection steps that closed the sites for more than a month. (Reuters)

A seventh Amazon employee has died of COVID-19. The company still refuses to say how many workers are sick. (Josh Dzieza / The Verge)

A court in Texas is holding the first known jury trial by Zoom. The news comes as court systems across the country face a choice between postponing trials until the pandemic ends or holding remote proceedings. (Zoe Schiffer / The Verge)

Doctors are tweeting about coronavirus to make facts go viral and combat misinformation on the platform. Bob Wachter, the chairman of the department of medicine at UCSF, sets aside two hours a day to tweet updates about the virus. (Georgia Wells / The Wall Street Journal)

The United States is amassing an army of contact tracers to contain the COVID-19 outbreak. But high caseloads, low testing, and American attitudes toward authority are likely to pose serious challenges. (James Temple / MIT Technology Review)

Germany and Australia have opted for two very different approaches to contact tracing. Australia will store user data on a central server, while Germany is going with a decentralized approach. (Amrita Khalid / Quartz)

A major question about Europe’s coronavirus contact-tracing apps is whether they will work when citizens of one country travel to another. As borders begin to reopen, the question will get even more pressing. (Natasha Lomas / TechCrunch)


Disney’s top streaming executive, Kevin Mayer, resigned on Monday to become CEO of TikTok. Mayer, who was once seen as Disney’s CEO in waiting, will now serve as COO of ByteDance, TikTok’s Chinese parent company. It’s a huge get for TikTok. Here’s Brooks Barnes from The New York Times:

Mr. Mayer’s departure from Disney is not entirely a surprise. Disney’s board of directors passed over him earlier this year when it was looking for a successor for Robert A. Iger, who abruptly stepped down in February. (Mr. Iger remains executive chairman, with a focus on the creative process.) Many people in Hollywood and on Wall Street had viewed Mr. Mayer, 58, as the logical internal candidate because the future of Disney rests on its ability to transform itself into a streaming titan. The top job, however, went to Bob Chapek, the lower-profile chairman of Disney’s theme parks and consumer products businesses.

Square told employees they can work from home forever, following a similar announcement from Twitter last week. While many tech companies have extended remote work timelines due to COVID-19, only Jack Dorsey’s organizations have made the switch permanent. (Zoe Schiffer / The Verge)

People have now spent $100 million on Oculus Quest virtual reality content. And Portal sales are up 10 times year over year, Andrew “Boz” Bosworth says in this interview. (Janko Roettgers / Protocol)

Oculus says Quest is starting a VR revolution. But plenty of unanswered questions remain about the technology’s prospects for making the leap from nerd caves to living rooms. (Seth Schiesel / Protocol)

TikTok houses all kind of look the same. That’s likely not an accident — moderators were told to hide videos with environments that were “shabby and dilapidated,” with “crack[s] in the wall” or “old and disreputable decorations.” (Emma Alpern / Curbed)

Clubhouse raised Series A funding from Andreessen Horowitz, bringing the company’s valuation to just above $100 million. Clubhouse, a voice-based social media app with fewer than 5,000 beta users, is now one of the most richly valued pieces of beta software ever. (Alex Konrad / Forbes)

Discord is also in talks to raise funding from investors. Business is booming because of stay-at-home orders amid the coronavirus pandemic. (Gillian Tan and Katie Roof / Bloomberg)

Singapore state investor Temasek joined the Libra Association. (Saheli Roy Choudhury / CNBC)

Google Meet has been downloaded 50 million times, up from five million at the beginning of March. The app has received a significant boost as people continue to shelter in place. (Hagop Kavafian / Android Police)

Things to do

Stuff to occupy you online during the quarantine.

Watch the BBC together with your friends. BBC Together is similar to Netflix Party, but for British TV.

Read 14 ways that people are finding joy during the pandemic, including taking butt selfies.

Listen to me talk about various aspects of tech and the pandemic with iHeartRadio’s Daily Dive, Kara Swisher and her son Louie, and my friends on The Vergecast. I also spoke with Australia’s Late Night Live show about Facebook’s preliminary settlement with content moderators.

Those good tweets

Talk to us

Send us tips, comments, questions, and GIFs.
