
The Interface

What Zoom doesn’t understand about the Zoom backlash


[Ed. note: Today’s newsletter and column were written and distributed before Zoom CEO Eric S. Yuan published his 1,300-word plan to address the security and privacy issues related to the company’s unprecedented consumer growth. What follows is unedited because email is forever.]

Just in time for one backlash against the technology industry to end — or at least pause — a fresh set of concerns has arrived to occupy our attention. Zoom, the once-obscure enterprise video chat app company, rocketed to prominence as COVID-19 forced tens of millions of Americans — and most of Silicon Valley — to begin working, schooling, and socializing at home. Like countless people, I’m now on Zoom for several hours a day. But with all that new usage comes heightened scrutiny — and in the first weeks of the Great Social Distancing, Zoom has repeatedly come up short.

The first problem was the Zoombombings. I don’t know if I was the first victim of this, but I was certainly one of them. My friend Hunter and I started a virtual happy hour a few weeks ago, and after we tweeted the links, some trolls kept stopping by to take over our screens and share porn. We quickly figured out how to fix the problem, but Zoombombings continue daily. The FBI is looking into it, and so is the New York attorney general’s office. The issue is that Zoom allows people who have joined your call to share their own screens by default, and the controls for changing this setting are difficult to find.

The second problem was that Zoom started generating directories of every email address that signed into a call and then letting strangers start placing video calls to one another. As with screen sharing being enabled by default, this was arguably a feature that made sense for intra-company chats but not for broader use. Joseph Cox had the story at Vice:

The issue lies in Zoom’s “Company Directory” setting, which automatically adds other people to a user’s lists of contacts if they signed up with an email address that shares the same domain. This can make it easier to find a specific colleague to call when the domain belongs to an individual company. But multiple Zoom users say they signed up with personal email addresses, and Zoom pooled them together with thousands of other people as if they all worked for the same company, exposing their personal information to one another.

“I was shocked by this! I subscribed (with an alias, fortunately) and I saw 995 people unknown to me with their names, pictures and mail addresses,” Barend Gehrels, a Zoom user impacted by the issue who flagged it to Motherboard, wrote in an email.
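The failure mode described above is easy to reproduce in a toy model. The following Python sketch is purely illustrative — the sample addresses, the grouping rule, and the missing exclusion list for consumer email providers are assumptions made for demonstration, not Zoom’s actual code:

```python
# Hypothetical sketch: a contact directory built by grouping signups on email
# domain works for corporate domains but breaks for shared personal providers.
from collections import defaultdict

signups = [
    "alice@examplecorp.com", "bob@examplecorp.com",              # real colleagues
    "carol@popular-mail.example", "dave@popular-mail.example",   # total strangers
]

directory = defaultdict(list)
for email in signups:
    domain = email.split("@", 1)[1]
    directory[domain].append(email)  # everyone on the same domain is pooled

# Without an exclusion list of consumer email providers, unrelated users who
# happen to share a personal domain are exposed to each other as "colleagues".
for domain, members in directory.items():
    print(domain, "->", members)
```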

The third problem was that Zoom ran around telling everyone that its platform is “end-to-end encrypted,” when in fact it had redefined “end-to-end encryption” without telling anyone. Micah Lee and Yael Grauer had the story in The Intercept:

As long as you make sure everyone in a Zoom meeting connects using “computer audio” instead of calling in on a phone, the meeting is secured with end-to-end encryption, at least according to Zoom’s website, its security white paper, and the user interface within the app. But despite this misleading marketing, the service actually does not support end-to-end encryption for video and audio content, at least as the term is commonly understood. Instead it offers what is usually called transport encryption, explained further below. […]

The encryption that Zoom uses to protect meetings is TLS, the same technology that web servers use to secure HTTPS websites. This means that the connection between the Zoom app running on a user’s computer or phone and Zoom’s server is encrypted in the same way the connection between your web browser and this article (on https://theintercept.com) is encrypted. This is known as transport encryption, which is different from end-to-end encryption because the Zoom service itself can access the unencrypted video and audio content of Zoom meetings. So when you have a Zoom meeting, the video and audio content will stay private from anyone spying on your Wi-Fi, but it won’t stay private from the company. (In a statement, Zoom said it does not directly access, mine, or sell user data.)
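To make that distinction concrete, here is a minimal, purely conceptual Python sketch. It uses symmetric Fernet keys from the third-party cryptography package as a stand-in for real TLS sessions and end-to-end key exchange; it is a toy illustration of the idea, not how Zoom, TLS, or any production system is actually implemented:

```python
# Conceptual sketch only: why transport encryption lets the service read content
# while end-to-end encryption does not. Fernet keys stand in for real key exchange.
from cryptography.fernet import Fernet

frame = b"meeting audio/video frame"

# Transport encryption (TLS-style): each client shares a key with the server.
client_to_server = Fernet(Fernet.generate_key())
in_transit = client_to_server.encrypt(frame)
# The server holds the key for its leg of the connection, so it can decrypt:
assert client_to_server.decrypt(in_transit) == frame  # the service sees the content

# End-to-end encryption: only the meeting participants share the key.
participants_only = Fernet(Fernet.generate_key())
relayed = participants_only.encrypt(frame)
# The server merely forwards `relayed`; without participants_only it cannot
# recover the plaintext, so the content stays private from the service itself.
```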

There are other problems. Like, it turns out Zoom evades macOS administrator controls to install itself without you having to ask your boss for permission. And there’s a way to steal someone’s Windows credentials over Zoom by sharing links, though arguably that’s more of a Windows problem than a Zoom problem. To round out the list, a security researcher on Wednesday found two additional ways to exploit Zoom and wrote about them on his blog.

At this point, you may be wondering what Zoom has to say about all this. Over at Protocol, David Pierce talks to Zoom’s chief marketing officer, Janine Pelosi, about the past few weeks. He writes:

“The product wasn’t designed for consumers,” Zoom CMO Janine Pelosi told me, “but a whole lot of consumers are using it.” That’s forced Zoom to evaluate a lot about the platform, but especially its default privacy settings.

On the surface, this sounds reasonable. Zoom is a business tool, but it’s now being used outside of businesses, and so new vulnerabilities have emerged. And yet that argument is challenged by all the problems above, which basically resolve to this: in order to make a popular video chat app, you have to make it extremely easy to use.

In other words, you have to make it a consumer app.

In the old days — the 1990s, basically — the tools you used for work were determined by your workplace. They bought you your computer, and your license for Microsoft Office, and whatever other arcane and generally awful-to-use programs you needed to get your job done.

That all changed once people got mobile phones and could begin using whichever programs they wanted to. A new class of productivity tools arose emphasizing design and ease of use: Google Docs, Box, Dropbox, and Evernote led the way, with Trello, Asana, and Slack following a few years later. These were tools built for work, but they were designed for consumers. It’s why they succeeded.

Zoom learned that lesson, and has applied it consistently since its founding in 2011. Designing for consumers is why, for example, Zoom goes to such great lengths to install itself on your Mac without you having to get permission from an admin. Designing for consumers is why Zoom tries to generate a company directory on your behalf. Designing for consumers is why Zoom lets you log in with Facebook. (Something else it got in trouble for, perhaps wrongly, this week.)

And to be clear, designing for consumers has been a good choice for Zoom. It helped the company grow much faster than the competition — most notably Skype, which appears to have been caught flat-footed by the moment. Zoom has so much momentum right now that creating virtual backgrounds for your calls — a fun and distinctive and extremely consumer-y feature of the product — has suddenly become a key marketing platform for Hollywood.

Consumer-grade ease of use is essential for a tool like Zoom — but so is enterprise-grade security. That’s what its business customers are paying for, after all, and it’s why Zoom is going to have to start shoring up its platform in a hurry. Ben Thompson has a good idea for stopping the Zoomlash in its tracks:

Freeze feature development and spend the next 30 days on a top-to-bottom review of Zoom’s approach to security and privacy, followed by an update on how the company is re-allocating resources based on that review.

That won’t stop the occasional zero-day exploit from popping up. But it would go a long way toward demonstrating that the company understands the stakes of our new world and is prepared to act accordingly. Zoom’s problem has never been that, as its chief marketing officer says, “it wasn’t designed for consumers.” The problem is that it was.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: Google is partnering with California lawmakers to give out 4,000 Chromebooks to students in need in California. It’s also providing free wifi to 100,000 rural households during the coronavirus pandemic to make remote learning more accessible.

Trending sideways: Facebook, Twitter, and YouTube are adopting stricter policies to limit coronavirus scams and stop misinformation on the platforms. But people keep posting things that clearly violate the rules. The situation underscores how the companies are engaged in an endless game of whack-a-mole that’s tough to win.

Pandemic

Amazon employees at a fulfillment center near Detroit, Michigan, plan to walk out over the company’s handling of COVID-19. Workers say management was slow to tell them about new coronavirus cases and didn’t provide enough cleaning supplies. (Josh Dzieza / The Verge)

Amazon ignored social distancing guidelines at recruiting events as it races to hire 100,000 new workers. The company has since begun making the events virtual. (Spencer Soper and Matt Day / Bloomberg)

Palantir is in talks with France, Germany, Austria and Switzerland about using its software to help them respond to COVID-19. The data-analytics firm says its technology can do everything from helping to trace the spread of the virus to letting hospitals predict staff and supply shortages. (Helene Fouquet and Albertina Torsoli / Bloomberg)

Palantir is also behind a new tool being used by the Centers for Disease Control (CDC) to monitor how the coronavirus is spreading. The tool will also help the CDC understand how well equipped hospitals are to deal with a spike in cases. (Thomas Brewster / Forbes)

A group of European experts is preparing to launch an initiative to trace people’s smartphones to see who has come into contact with those who have COVID-19. The goal is to help health authorities act swiftly to stop the spread of the virus in a way that’s compliant with the General Data Protection Regulation. (Douglas Busvine / Reuters)

School closures are leading to a new wave of student surveillance. Schools are racing to sign deals with online proctoring companies that watch students through their webcams while they take exams. (Drew Harwell / The Washington Post)

Facebook is expanding its Community Help feature as part of the company’s COVID-19 efforts. The new COVID-19 Community Help hub will let people request or offer help to those affected by the coronavirus outbreak. (Sarah Perez / TechCrunch)

Here’s how Sheryl Sandberg is dealing with the coronavirus pandemic. She’s quarantining at home with her fiancé and kids and raising millions for her local food bank. (Alyson Shontell / Business Insider)

Coronavirus is forcing couples to cancel their weddings, but some people are getting creative and live-streaming their nuptials on Zoom. (Zoe Schiffer / The Verge)

Doctors are turning to Twitter and TikTok to share coronavirus news. They’re trying to combat the bad medical advice that’s circulating around the big platforms. (Kaya Yurieff / CNN)

A Chinese diplomat has been helping to spread a conspiracy theory that the United States and its military could be behind the coronavirus outbreak. Here’s how that hoax started. (Vanessa Molter and Graham Webster / Stanford Internet Observatory)

The coronavirus pandemic shows why Comcast could get rid of its data caps permanently without killing its business. (Jon Brodkin / Ars Technica)

Hackers are taking advantage of the coronavirus pandemic to launch cyberattacks against healthcare providers. In one instance, criminals used encryption to lock down thousands of a company’s patient records and promised to publish them online if a ransom wasn’t paid. (Ryan Gallagher / Bloomberg)

Startups are desperately fighting to survive the coronavirus pandemic. Some are laying off employees and slashing costs — but even that might not be enough. (Erin Griffith / The New York Times)

Americans streamed 85 percent more minutes of video in March 2020 compared to March 2019. Binge watching on Hulu has grown more than 25 percent in the past two weeks alone. (Sara Fischer / Axios)

Snap says video calling is up 50 percent month over month. This blog post about how usage has changed with the coronavirus pandemic is the kind of check-in I’ve been asking for from big tech companies.

Rebecca Jennings invites you to post with abandon. She says the digital world is now a far happier place than the real world, which is a perfect excuse for you to spend time on social media doing various Instagram and TikTok challenges. (Rebecca Jennings / Vox)

Virus tracker

Total cases in the US: 205,172

Total deaths in the US: At least 4,500

Reported cases in California: 8,582

Reported cases in New York: 83,760

Reported cases in Washington: 5,292

Data from The New York Times.

Governing

Democrats are worried that Google’s ban on most COVID-19-related ads from nongovernmental organizations could help Trump get re-elected. They say it allows the President to run ads promoting his response to the crisis while denying Democrats the chance to run ads criticizing that response. Emily Birnbaum at Protocol reports:

Prominent Democratic PACs in recent days have funneled millions of dollars into television ads accusing Trump of mishandling the coronavirus crisis. But staffers at several Democratic nonprofits and digital ad firms learned this week that they would not be able to use Google’s dominant ad tools to spread true information about President Trump’s handling of the outbreak on YouTube and other Google platforms. The company only allows PSA-style ads from government agencies like the Centers for Disease Control and trusted health bodies like the World Health Organization. Several Democratic and progressive strategists were rebuffed when they tried to place Google ads criticizing the Trump administration’s response to coronavirus, officials within the firms told Protocol.

Google’s data centers use billions of gallons of water to keep processing units cool. Some of the facilities are located in dry areas that are struggling to conserve their water supplies. (Nikitha Sattiraju / Bloomberg)

As presidential candidates pivot to campaigning almost entirely online, political tech startups are scrambling to keep up with demand. Business is booming for companies that let candidates easily text or call voters and donors. (Issie Lapowsky / Protocol)

Wisconsin faces a shortage of poll workers and a potential dip in voter turnout due to the coronavirus pandemic, but the state is moving forward with its April 7th primary anyway. (Zach Montellaro / Politico)

Oracle founder Larry Ellison is helping President Trump build a database of COVID-19 cases. He’s also turning his Hawaiian island resort into a health and wellness laboratory powered by data, whatever that means! It all promises to be a great Netflix series someday. (Angel Au-Yeung / Forbes)

Facebook is stepping up its efforts to help with the US census. Facebook and Instagram now show notifications reminding people to complete the census, and the company is also working to combat misinformation about the process. (Facebook)

Industry

YouTube is planning to launch a rival to TikTok called Shorts by the end of the year. The app will take advantage of YouTube’s catalog of licensed music by allowing users to choose songs as soundtracks for their videos. Alex Heath and Jessica Toonkel at The Information have the story:

TikTok’s business is small relative to that of YouTube, which had more than $15 billion in advertising revenue last year. ByteDance makes the overwhelming majority of its revenue in China—including from its local TikTok equivalent, known as Douyin—and has used its financial resources to aggressively promote TikTok in the U.S. and elsewhere. In a note to employees late last year, ByteDance CEO Zhang Yiming urged them to “diversify TikTok’s growth” and “increase investment in weaker markets,” according to Reuters.

The part of the economy devoted to creating novel Instagram backdrops is tanking because of the coronavirus pandemic. Color Factory and the Museum of Ice Cream have both shut down for now, laying off most of their employees. (Ashley Carman / The Verge)

YTMND is back, nearly a year after being brought down by a server failure. The site has modernized a bit, and no longer needs Flash to view its archive of looping GIFs and synchronized music. (Jacob Kastrenakes / The Verge)

Jack Black joined TikTok. His first video shows him doing a dance he calls the “Quarantine Dance.” He is, um, shirtless. And wearing cowboy boots. (Taylor Lyles / The Verge)

Animal Crossing’s social media explosion has left some fans feeling frustrated and jealous of other people’s elaborate designs. The game has become a phenomenon on social media in part thanks to a new button that lets players easily share screenshots. (Patricia Hernandez / Polygon)

Things to do

Stuff to occupy you online during the quarantine.

Participate in the 2020 census! It takes about 10 minutes and helps direct billions of dollars in federal funding to local communities. (And if you won’t listen to me, maybe you’ll listen to Sheryl Sandberg.)

Go to one of these virtual events with authors and illustrators creating content specifically for kids.

Watch Protocol’s Issie Lapowsky interview Rep. Ro Khanna, who represents Silicon Valley, in a Zoom meetup on Thursday at noon PT.

And finally…

Talk to us

Send us tips, comments, questions, and Zoom vulnerabilities: casey@theverge.com and zoe@theverge.com.

The Interface

How to think about polarization on Facebook


I.

On Tuesday, the Wall Street Journal published a report about Facebook’s efforts to fight polarization since 2016, based on internal documents and interviews with current and former employees. Rich with detail, the report describes how Facebook researched ways to reduce the spread of divisive content on the platform, and in many cases set aside the recommendations of employees working on the problem. Here are Jeff Horwitz and Deepa Seetharaman:

“Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.” […]

Fixing the polarization problem would be difficult, requiring Facebook to rethink some of its core products. Most notably, the project forced Facebook to consider how it prioritized “user engagement”—a metric involving time spent, likes, shares and comments that for years had been the lodestar of its system.

The first thing to say is that “polarization” can mean a lot of things, and that can make the discussion about Facebook’s contribution to the problem difficult. You can use it in a narrow sense to talk about the way that a news feed full of partisan sentiment could divide the country. But you could also use it as an umbrella term to talk about initiatives related to what Facebook and other social networks have lately taken to calling “platform integrity” — removing hate speech, for example, or labeling misinformation.

The second thing to say about “polarization” is that while it has a lot of negative effects, it’s worth thinking about what your proposed alternative to it would be. Is it national unity? One-party rule? Or just everyone being more polite to one another? The question gets at the challenge of “fighting” polarization if you’re a tech company CEO: even if you see it as an enemy, it’s not clear what metric you would rally your company around to deal with it.

Anyway, Facebook reacted to the Journal report with significant frustration. Guy Rosen, who oversees these efforts, published a blog post on Wednesday laying out some of the steps the company has taken since 2016 to fight “polarization” — here used in that umbrella-term sense of the word. The steps include shifting the News Feed to include more posts from friends and family than publishers; starting a fact-checking program; more rapidly detecting hate speech and other malicious content using machine-learning systems and an expanded content moderation workforce; and removing groups that violate Facebook policies from algorithmic recommendations.

Rosen writes:

We’ve taken a number of important steps to reduce the amount of content that could drive polarization on our platform, sometimes at the expense of revenues. This job won’t ever be complete because at the end of the day, online discourse is an extension of society and ours is highly polarized. But it is our job to reduce polarization’s impact on how people experience our products. We are committed to doing just that.

Among the reasons the company was frustrated with the story, according to an internal Workplace post I saw, is that Facebook had spent “several months” talking with the Journal reporters about their findings. The company gave them a variety of executives to speak with on and off the record, including Joel Kaplan, its vice president of global public policy, who often pops up in stories like this to complain that some action might disproportionately hurt conservatives.

In any case, there are two things I think are worth mentioning about this story and Facebook’s response to it. One is an internal tension in the way Facebook thinks about polarization. And the other is my worry that asking Facebook to solve for divisiveness could distract from the related but distinct issues around the viral promotion of conspiracies, misinformation, and hate speech.

First, that internal tension. On one hand, the initiatives Rosen describes to fight polarization are all real. Facebook has invested significantly in platform integrity over the past several years. And, as some Facebook employees told me yesterday, there are good reasons not to implement every suggestion a team brings you. Some might be less effective than other efforts that were implemented, for example, or they might have unintended negative consequences. Clearly some employees on the team feel like most of their ideas weren’t used, or were watered down, including employees I’ve spoken with myself over the years. But that’s true of a lot of teams at a lot of companies, and it doesn’t mean that all their efforts were for nought.

On the other hand, Facebook executives largely reject the idea that the platform is polarizing in the tearing-the-country-apart sense of the word. The C-suite read closely a working paper that my colleague Ezra Klein wrote about earlier this year that casts doubt on social networks’ contribution to the problem. The paper by Levi Boxell, Matthew Gentzkow, and Jesse Shapiro studies what is known as “affective polarization,” which Klein defines as “the difference between how warmly people view the political party they favor and the political party they oppose.” They found that affective polarization had increased faster in the United States than anywhere else — but that in several large, modernized nations with high internet usage, polarization was actually decreasing. Klein wrote:

One theory this lets us reject is that polarization is a byproduct of internet penetration or digital media usage. Internet usage has risen fastest in countries with falling polarization, and much of the run-up in US polarization predates digital media and is concentrated among older populations with more analogue news habits.

Klein, who published a book on the subject this year, believes that social networks contribute to polarization in other ways. But the fact that there are many large countries where Facebook usage is high and polarization is decreasing helps to explain why the issue is not top of mind for Facebook’s C-suite. As does Mark Zuckerberg’s own stated inclination against platforms making editorial judgments on speech. (Which he reiterated at a virtual shareholders’ meeting today.)

So here you have a case where Facebook can be “right” in a platform integrity sense — look at all these anti-polarization initiatives! — while the Journal is right in a larger one: Facebook has been designed as a place for open discussion, and human nature ensures that those discussions will often be heated and polarizing, and the company has chosen to take a relatively light touch in managing the debates. And it does so because executives think the world benefits from raucous, few-holds-barred discussions, and because they aren’t persuaded that those discussions are tearing countries apart.

Where Facebook can’t wriggle off the hook, I think, is in the Journal’s revelation of just how important its algorithmic choices have been in the spread of polarizing speech. Again, here the problem isn’t “polarization” in the abstract — but in concrete harms related to anti-science, conspiracy, and hate groups, which grow using Facebook’s tools. The company often suggests that its embrace of free speech has created a neutral platform, when in fact its design choices often reward division with greater distribution.

This is the part of the Journal’s report that I found most compelling:

The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”

Facebook says that extremist groups are no longer recommended. But just today, the disinformation researcher Nina Jankowicz joined an “alternative health” group on Facebook and immediately saw recommendations that she join other groups related to white supremacy, anti-vaccine activism, and QAnon.

Ultimately, despite its efforts so far, Facebook continues to unwittingly recruit followers for bad actors, who use it to spread hate speech and misinformation detrimental to the public health. The good news is that the company has teams working on those problems, and surely will develop new solutions over time. The question raised by the Journal is, when that happens, how closely their bosses will listen to them.

II.

On Tuesday, Twitter added a link to two of President Trump’s tweets, designating them as “potentially misleading.” It took this action because Trump, as part of a disinformation campaign alleging that voting by mail will trigger massive vote fraud, was appearing to interfere with the democratic process in violation of the company’s policies.

Trump was outraged about the links, and tweeted about being censored to his 80 million followers. He threatened to shut down social media companies. He said “big action” would follow. At the direction of a White House spokeswoman, right-wing trolls began to harass Yoel Roth, Twitter’s head of site integrity, who has previously tweeted criticism of Trump. Members of Congress including Marco Rubio and Josh Hawley tweeted that Twitter’s action could not stand, and that social platforms should lose Section 230 protections for moderating speech — willfully misunderstanding Section 230 in the way that they always do. Late in the day, there was word of a forthcoming executive order, with no other details.

I could spend a lot of time here speculating about the coming battle between social networks and the Republican establishment, with Silicon Valley’s struggling efforts to moderate their unwieldy platforms going head-to-head with Republicans’ bad-faith attempts to portray them as politically biased. But the past few years have taught us that while Congress is happy to kick and scream about the failures of tech platforms, it remains loath to actually regulate them.

It’s true that we have seen some apparent retaliation from Trump against social networks — the strange fair housing suit filed against Facebook last year comes to mind. And several antitrust cases are currently underway that could result in significant action. But for the most part, as Makena Kelly writes today in The Verge, the bluster is as far as it ever really goes:

The president has never followed through on his threats and used his considerable powers to place legal limits on how these companies operate. His fights with the tech companies last just long enough to generate headlines, but flame out before they can make a meaningful policy impact. And despite the wave of conservative anger currently raining down on Twitter, there’s no reason to think this one will be any different.

Those flameouts are most tangible in the courts. On the same day as Trump’s tweets, the US Court of Appeals in Washington ruled against the nonprofit group Freedom Watch and fringe right figure Laura Loomer in a case purporting that Facebook, Google, and Twitter conspired to suppress conservative content online, according to Bloomberg. Whether it be Loomer or Rep. Tulsi Gabbard (D-HI) fighting the bias battle, the courts have yet to rule in their favor.

In fact, as former Twitter spokesman Nu Wexler noted, Trump has even less leverage over Twitter than he does over other tech companies. “Twitter don’t sell political ads, they’re not big enough for an antitrust threat, and he’s clearly hooked on the platform,” Wexler tweeted. And whatever Trump may think, as the law professor Kate Klonick noted, “The First Amendment protects Twitter from Trump. The First Amendment doesn’t protect Trump from Twitter.”

Facts and logic aside, get ready: you’re about to hear a lot more cries from people complaining that they have been censored by Twitter. And it will be all over Twitter.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending sideways: YouTube began fixing an error in its moderation system that caused comments containing certain Chinese-language phrases critical of China’s Communist Party to be automatically deleted. The company still won’t explain what caused the deletions in the first place, though some are speculating that Chinese trolls trained the YouTube algorithm to block the terms. (James Vincent / The Verge)

Trending down: Harry Sentoso, a warehouse worker in Irvine who was part of Amazon’s COVID-19 hiring spree, died after two weeks on the job. Sentoso was presumed to have the novel coronavirus after his wife tested positive. (Sam Dean / Los Angeles Times)

Virus tracker

Total cases in the US: More than 1,701,500

Total deaths in the US: At least 100,000

Reported cases in California: 100,371

Total test results (positive and negative) in California: 1,696,396

Reported cases in New York: 369,801

Total test results (positive and negative) in New York: 1,774,128

Reported cases in New Jersey: 156,628

Total test results (positive and negative) in New Jersey: 635,892

Reported cases in Illinois: 114,448

Total test results (positive and negative) in Illinois: 786,794

Data from The New York Times. Test data from The COVID Tracking Project.

Governing

Whistleblowers say Facebook failed to warn investors about illegal activity happening on its platform. A complaint filed with the Securities and Exchange Commission late Tuesday includes dozens of pages of screenshots of opioids and other drugs for sale on Facebook and Instagram, reports Nitasha Tiku at The Washington Post:

The filing is part of a campaign by the National Whistleblower Center to hold Facebook accountable for unchecked criminal activity on its properties. By petitioning the SEC, the consortium is attempting to get around a bedrock law — Section 230 of the Communications and Decency Act — that exempts Internet companies from liability for the user-generated content on their platform.

Instead, the complaint focuses on federal securities law, arguing that Facebook’s failure to tell shareholders about the extent of illegal activity on its platform is a violation of its fiduciary duty. If Facebook alienates advertisers and has to shoulder the true cost of scrubbing criminals from its social networks, it could affect investors in the company, the complaint argues.

Facebook ran a multi-year charm offensive to develop friendly relationships with powerful state prosecutors who could use their investigative powers to harm the company’s revenue growth. In the end, the strategy had mixed results: Most of those attorneys general are now investigating the company for possible antitrust violations. I never cease to be amazed how ineffective tech lobbying is, given the money that gets spent on it. (Naomi Nix / Bloomberg)

A federal appeals court rejected claims that Twitter, Facebook, Apple, and Google conspired to suppress conservative views online. The decision affirmed the dismissal of a lawsuit by the nonprofit group Freedom Watch and the right-wing YouTube personality Laura Loomer, who accused the companies of violating antitrust laws and the First Amendment in a coordinated political plot. (Erik Larson / Bloomberg)

The Arizona attorney general sued Google for allegedly tracking users’ locations without permission. The case appears to hinge on whether Android menus were too confusing for the average person to navigate. (Tony Romm / Washington Post)

India’s antitrust body is looking into allegations that Google abused its market position to unfairly promote its mobile payments app. The complaint alleges Google hurt competition by prominently displaying Google Pay inside the Android app store in India. (Aditya Kalra and Aditi Shah / Reuters)

Google sent 1,755 warnings to users whose accounts were targets of government-backed attackers last month. The company highlighted new activity from “hack-for-hire” firms, many based in India, that have been creating Gmail accounts spoofing the World Health Organization. (Google)

Switzerland is now piloting a COVID-19 contact tracing app that uses the Apple-Google framework. The app, SwissCovid, is the first to put the Apple-Google model to use. (Christine Fisher / Engadget)

Silicon Valley’s billionaire Democrats are spending tens of millions of dollars to help Joe Biden catch up to President Trump’s lead on digital campaigning. These billionaires’ arsenals are funding everything from nerdy political science experiments to divisive partisan news sites to rivalrous attempts to overhaul the party’s beleaguered data file. (Theodore Schleifer / Recode)

A war has broken out on Reddit regarding how content is moderated. The feud started when a list of “PowerMods” began circulating, with the title “92 of top 500 subreddits are controlled by just 4 people.” (David Pierce / Protocol)

Twitter’s anti-porn filters blocked the name of Boris Johnson’s chief adviser, Dominic Cummings, from trending on the platform. Cummings has dominated British news for almost a week after coming under fire for traveling across the country during the coronavirus lockdown. It’s nice to read a truly funny story about content moderation for a change. (Alex Hern / The Guardian)

Industry

TikTok’s parent ByteDance generated more than $3 billion of net profit last year. The company’s revenue more than doubled from the year before, to $17 billion, propelled by high growth in user traffic. Here are Katie Roof and Zheping Huang at Bloomberg:

The company owes much of its success to TikTok, now the online repository of choice for lip-synching and dance videos by American teens. The ambitious company is also pushing aggressively into a plethora of new arenas from gaming and search to music. ByteDance could fetch a valuation of between $150 billion and $180 billion in an initial public offering, a premium relative to sales of as much as 20% to social media giant Tencent thanks to a larger global footprint and burgeoning games business, estimated Ke Yan, Singapore-based analyst with DZT Research.

Facebook’s experimental app division has a new product out today called Collab. The app lets users create short music videos using other people’s posts, which sounds a lot like TikTok. (Nick Statt / The Verge)

Facebook’s annual shareholder meeting was held virtually on Wednesday. One item on the agenda was a call for Mark Zuckerberg to relinquish his position as chair of Facebook’s board of directors, and be replaced by an independent figure. Somehow it failed! (Rob Price / Business Insider)

Instagram will start sharing revenue with creators for the first time, through ads in IGTV and badges that viewers can purchase on Instagram Live. The company has hinted for more than a year that ads would come to IGTV, often saying the long-form video offering would be the most likely place it’d first pay creators. Any time creators can develop a direct relationship with their audience and profit from it, I get super happy. (Ashley Carman / The Verge)

Google is rolling out a series of updates aimed at helping local businesses adapt to the COVID-19 pandemic. The company is expanding a product that allows businesses to sell gift cards during the government shutdown. It’s also allowing restaurants to point to their preferred delivery partners for customers that want to order through third-party apps. (Sarah Perez / TechCrunch)

About half of remote workers in the US report feeling less connected to their company, more stressed in ways that negatively impact their work, and say they are working more hours from home. The downsides could become prominent as more companies extend remote working deadlines beyond the coronavirus pandemic. (Kim Hart / Axios)

Things to do

Stuff to occupy you online during the quarantine.

Watch HBO Max. It’s here, and it’s totally diluting the HBO brand!

Turn your Fuji camera into a high-end webcam with this new software. It works over USB.

Subscribe to a new newsletter from Google walkout organizer Claire Stapleton. Tech Support promises to offer “existential advice for today’s tech worker.”

Replace your Zoom calls with a Sims-style virtual hangout. It’s a new twist on video chat from a company called Teeoh.

Call 1-775-HOT-VINE to hear audio clips of famous Vines. I just did and it was extremely charming.

Those good tweets …

Talk to us

Send us tips, comments, questions, and polarizing Facebook posts: casey@theverge.com and zoe@theverge.com.


The Interface

Why Twitter labeling Trump’s tweets as “potentially misleading” is a big step forward


From time to time a really bad post on a social network gets a lot of attention. Say a head of state falsely accuses a journalist of murder, or suggests that mail-in voting is illegal — those would be pretty bad posts, I think, and most people working inside and outside of the social network could probably agree on that. In my experience, though, average people and tech people tend to think very differently about what to do about a post like that. Today I want to talk about why.

When an average person sees a very bad post on a social network, they may call for it to be removed immediately. They will justify this removal on moral grounds — keeping the post up, they will say, is simply indecent. To leave it up would reflect poorly on the moral character of everyone who works at the company, especially its top executives. Some will say the executives should resign in disgrace, or possibly be arrested. Congress may begin writing letters, and new laws will be proposed, so that such a bad post never again appears on the internet.

When a tech company employee sees a really bad post, they are just as likely to be offended as the next person. And if they work on the company’s policy team, or as a moderator, they will look to the company’s terms of service. Has a rule been broken? Which one? Is it a clear-cut violation, or can the post be viewed multiple ways?

If a post is deeply offensive but not covered by an existing rule, the company may write a new one. As it does, employees will try to write the rule narrowly, so as to rule in the maximum amount of speech while ruling out only the worst. They will try to articulate the rule clearly, so that it can be understood in every language by an army of low-paid moderators (who may be developing post-traumatic stress disorder and related conditions).

Put another way, when an average person sees a really bad post, their instinct is to react with anger. And when a tech person sees a really bad post, their instinct is to react practically.

All of that context feels necessary to understand two Twitter debates playing out today: one over what Twitter ought to do about the fact that President Trump keeps tweeting without evidence that one of the few high-profile Republicans who regularly speaks out about him, the onetime congressman and current MSNBC host Joe Scarborough, may be implicated in the 2001 death of a former staffer. And one over what to do about the president’s war on voting by absentee ballot.

As to the former: in fact, according to the medical examiner, former Scarborough aide Lori Klausutis died of a blood clot. Now her widower is petitioning Twitter CEO Jack Dorsey to remove Trump’s tweets suggesting there may have been foul play. John Wagener wrote up the day’s events in the Washington Post:

With no evidence, Trump has continued to push a conspiracy theory that Scarborough, while a member of Congress, had an affair with his married staffer and that he may have killed her — a theory that has been debunked by news organizations including The Washington Post and that Timothy Klausutis called a “vicious lie” in his letter to Dorsey.

On Tuesday morning, Trump went on Twitter again to advocate the “opening of a Cold Case against Psycho Joe Scarborough,” which he said was “not a Donald Trump original thought.”

“So many unanswered & obvious questions, but I won’t bring them up now!” Trump added. “Law enforcement eventually will?”

If you believe social networks are obligated to remove posts that are indecent, it’s clear why you would want these tweets to come down. The president is inflicting an emotional injury on an innocent, bereaved man for political gain. (Trump has historically benefitted from falsely suggesting his Republican opponents are murderers, as Jonathan Chait notes here.)

But if your job is to write or enforce policy at a tech company, your next steps are far less clear. Consider the facts. Did Trump say definitively that Scarborough committed murder? He didn’t — “maybe or maybe not,” he tweeted this morning. Did Trump incite violence against Scarborough, directly or indirectly? (Twitter has promised to hide such tweets behind a warning label, but it has never done so.) I don’t think so, and while encouraging law enforcement to investigate the case arguably represents an abuse of presidential power, our nation’s founders invested the responsibility for reining in a wayward chief executive not with private companies but with the other two branches of government.

Let’s make it more complex: Scarborough is a public figure — a former congressman, no less. Traditionally social networks have tolerated much more indecency when it comes to average people wanting to yell at the rich and powerful, and when it comes to the rich and powerful yelling at one another. And when two of those figures are engaged in political discourse — the kind of discourse that the First Amendment, which informs so many of the principles of tech company speech policies, sought to protect above all else — a tech policy person would probably want to give that speech the widest possible latitude.

I spent the day talking with former Twitter employees who worked on speech and policy issues. For the most part, they thought Trump’s Scarborough tweets should stay up. For one, the tweets don’t violate existing policy. And two, they believe you can’t design a policy that bans these tweets that doesn’t also massively chill speech across the platform. As one former employee put it to me, “If speculation about unproven crime is not allowed, I have bad news for anyone who wants to tweet about a true crime podcast.”

Now, it’s possible for me to imagine a time when Twitter would have to take action against these tweets. There was a time when Alex Jones’ tweets and videos about the Sandy Hook school shooting also fell into the realm of “speculating about true crime,” even though his conspiracy theories were almost certainly promoted in bad faith. But then Jones’ fans began stalking and harassing families of the murder victims, in some cases threatening to kill them. Eventually Jones was removed from most of the big social platforms.

If Trump continues to promote the lie about Scarborough, we can assume some of his followers will take matters into their own hands. It’s been barely a year since one of those followers was sentenced to 20 years in prison for mailing 16 pipe bombs to people he perceived to be Trump’s enemies. If something similar happens as a result of the Scarborough tweets, Twitter will face criticism for failing to act. It’s a terrible position for the company to be in.

But mostly it’s just a terrible thing for the president to do. And in a democracy we have remedies for bad behavior that go well beyond asking a tech company to de-platform a politician. You can speak your mind, you can march in the streets, and you can vote. That’s why, for most problems of political speech, my preferred solution is more speech, in the form of more votes.

Which brings us to the day’s surprising conclusion: Twitter’s decision to label, for the first time, some of Trump’s tweets as potentially misleading. Makena Kelly has the story in The Verge:

On Tuesday, Twitter labeled two tweets from President Donald Trump making false statements about mail-in voting as “potentially misleading.” It’s the first time the platform has fact-checked the president.

The label was imposed on two tweets Trump posted Tuesday morning falsely claiming that “mail-in ballots will be anything less than substantially fraudulent” and would result in “a rigged election.” The tweets focused primarily on California’s efforts to expand mail-in voting due to the novel coronavirus pandemic. On Sunday, the Republican National Committee sued California Gov. Gavin Newsom over the state’s moves to expand mail-in voting.

According to a Twitter spokesperson, the tweets “contain potentially misleading information about voting processes and have been labeled to provide additional context around mail-in ballots.” When a user sees the tweets from Trump, a link from Twitter is attached to them that says “Get the facts about mail-in ballots.” The link leads to a collection of tweets and news articles debunking the president’s statements.

This story is surprising for several reasons. It involves Twitter, a company notoriously prone to inaction, making a decisive move against its most powerful individual user. It ensures a long stretch of partisan mud-wrangling over which future tweets from which other politicians deserve similar treatment — and over whether one side or another is being punished disproportionately. And it puts Twitter prominently in the position it has long sought to avoid — “the arbiter of truth,” chiming in when the president lies to say that no, actually, it’s legal to vote by absentee ballot.

And yet at the same time, Twitter’s decision was rooted in principle. In January Twitter began allowing users to flag tweets that contain misleading information about how to vote. Today it applied that policy, fairly and with relative precision. Some have criticized the design and wording of the actual label — “Get the facts about mail-in ballots” doesn’t exactly scream “the president is lying about this.” But it still feels like a step forward, and not a small one.

Social networks that reach global scale will always suffer from really bad posts, some of them posted by their most prominent users. And it’s precisely because those platforms have become so important to political speech that I would rather decisions about what stays up and what comes down not be dictated by the whims of unelected, unaccountable founders.

Twitter’s decision to leave up some of Trump’s awful tweets and label others as misleading won’t fully satisfy anyone. But in my view this is a case where the company has made some hard decisions in a relatively judicious way. And anyone who tries to write a better, more consistent policy — one that goes beyond “this is indecent, take it down” — will find that it’s much harder than it looks.

The Ratio

Today in news that could affect public perception of the big tech platforms.

⬆️Trending up: Facebook announced new features for Messenger that will alert users about messages that appear to come from financial scammers or child abusers. The company said the detection will occur only based on metadata—not analysis of the content of messages—so that it doesn’t undermine end-to-end encryption. (Andy Greenberg / Wired)

⬇️Trending down: YouTube deleted comments with two phrases that insult the Chinese Communist party. The company said it was an error. (James Vincent / The Verge)

⬇️Trending down: Amazon supplied local TV news stations with a propaganda reel intended to change the subject from deaths and illnesses at its distribution centers. At least 11 stations aired it, and this video lets you watch various news anchors robotically parrot the PR talking points. (Nick Statt / The Verge)

Virus tracker

Total cases in the US: More than 1,685,800

Total deaths in the US: At least 98,800

Reported cases in California: 99,547

Total test results (positive and negative) in California: 1,696,396

Reported cases in New York: 368,669

Total test results (positive and negative) in New York: 1,774,128

Reported cases in New Jersey: 155,764

Total test results (positive and negative) in New Jersey: 635,892

Reported cases in Illinois: 113,402

Total test results (positive and negative) in Illinois: 786,794

Data from The New York Times. Test data from The COVID Tracking Project.

Governing

Facebook spent years studying how the platform polarized people, according to sources and internal documents. One slide from a 2018 presentation read ”our algorithms exploit the human brain’s attraction to divisiveness.” Here are Jeff Horwitz and Deepa Seetharaman from the Wall Street Journal:

Facebook had kicked off an internal effort to understand how its platform shaped user behavior and how the company might address potential harms. Chief Executive Mark Zuckerberg had in public and private expressed concern about “sensationalism and polarization.”

But in the end, Facebook’s interest was fleeting. Mr. Zuckerberg and other senior executives largely shelved the basic research, according to previously unreported internal documents and people familiar with the effort, and weakened or blocked efforts to apply its conclusions to Facebook products.

Facebook policy chief Joel Kaplan, who played a central role in vetting proposed changes, argued at the time that efforts to make conversations on the platform more civil were “paternalistic,” said people familiar with his comments.

President Trump is considering creating a panel to review complaints of anticonservative bias on social media. Facebook, Twitter, and Google all pushed back against the proposed panel, denying any anticonservative bias. I imagine today’s action from Twitter will come up, if this thing turns out to be real. (John D. McKinnon and Alex Leary / The Wall Street Journal)

Doctors with verified accounts on Facebook are spreading coronavirus misinformation. The company has been trying to crack down on misinformation about the virus, but the accounts are still able to reach hundreds of thousands of people regularly. (Rob Price / Business Insider)

Here’s a guide to the most notorious spin doctors and conspiracy theorists spreading misinformation about the coronavirus pandemic. (Jane Lytvynenko, Ryan Broderick and Craig Silverman / BuzzFeed)

Influencers say Instagram is biased against plus-sized bodies — and they might be right. Content moderation on social media is usually a mix of artificial intelligence and human moderators, and both methods have a potential bias against larger bodies. (Lauren Strapagiel / BuzzFeed)

Joe Biden’s digital team is trying to raise his online profile prior to the 2020 election while understanding his limitations on social media. Which is another way of saying he’s still not on TikTok. (Sam Stein / Daily Beast)

Democrats are introducing a new bill that would tighten restrictions on online political ad-targeting on platforms like Facebook. The Protecting Democracy from Disinformation Act would limit political advertisers to targeting users based only on age, gender and location — a move intended to crack down on microtargeting. (Cristiano Lima / Politico)

Two new laws in Puerto Rico make it a crime to report information about emergencies that the government considers “fake news.” The ACLU filed a lawsuit on behalf of two Puerto Rican journalists who fear the laws will be used to punish them for their reporting on the coronavirus crisis. (Sara Fischer / Axios)

One of the first contact-tracing apps in the US, North and South Dakota’s Care19, violates its own privacy policy by sharing location data with an outside company. The oversight suggests that state officials and Apple, both of which were responsible for vetting the app before it became available April 7th, were asleep at the wheel. (Geoffrey A. Fowler / The Washington Post)

China’s virus-tracking apps have been collecting information, including location data, on people in hundreds of cities across the country. But the authorities have set few limits on how that data can be used. And now, officials in some places are loading their apps with new features, hoping the software will live on as more than just an emergency measure. (Raymond Zhong / The New York Times)

Serious security vulnerabilities were discovered in Qatar’s mandatory contact tracing app. The security flaw, which has now been fixed, would have allowed bad actors to access highly sensitive personal information, including the name, national ID, health status and location data of more than one million users. (Amnesty International)

Inside the NSA’s secret tool for mapping your social network. Edward Snowden revealed the agency’s phone-record tracking program. But the database was much more powerful than anyone knew. (Barton Gellman / Wired)

Silicon Valley’s main data-protection watchdog in Europe came under attack for taking too long to wrap up probes into Facebook, Instagram and WhatsApp. The group has yet to issue any significant fines two years after the EU empowered it to levy hefty penalties for privacy violations. (Stephanie Bodoni / Bloomberg)

A court in the Netherlands is forcing a grandmother to delete photos of her grandkids that she posted on Facebook and Pinterest without their parents’ permission. The judge ruled the matter was within the scope of the EU’s General Data Protection Regulation. (BBC)

Industry

⭐Shopping for Instacart is dangerous during the pandemic. Now, workers who’ve gotten sick say they haven’t been able to get the quarantine pay they were promised. Russell Brandom at The Verge has the story:

It’s a common story. On forums and in Facebook groups, Instacart’s sick pay has become a kind of sour joke. There are lots of posts asking how to apply, but no one seems to think they’ll actually get the money. The Verge spoke to eight different workers who were placed under quarantine — each one falling prey to a different technicality. A worker based in Buffalo was quarantined by doctors in March but didn’t qualify for an official test, leaving him with no verification to send to reps. In western Illinois, a man received a quarantine order from the state health department, but without a test, he couldn’t break through. Others simply fell through the cracks, too discouraged to fight the claim for the weeks it would likely take to break through.

Amazon lost some online shoppers to rivals during the pandemic as it struggled to keep up with demand. Now the retail giant is turning back to faster shipping times and big sales to lure people back to the platform. (Karen Weise / The New York Times)

Google said the majority of its employees will work from home through 2020. It’s giving everyone $1,000 to cover any new work-from-home expenses. (Chaim Gartenberg / The Verge)

Welcome to the age of the TikTok cult. These aren’t the ideological cults most people are familiar with. Instead, they are open fandoms revolving around a single creator. Right now they’re being weaponized to perform social-media pranks, but it feels like something much darker is around the corner. (Taylor Lorenz / The New York Times)

Zoom temporarily removed Giphy from its chat feature, days after Facebook acquired the GIF platform for $300 million. "Once additional technical and security measures have been deployed, we will re-enable the feature," the company said.

Facebook renamed Calibra, the digital wallet it hopes will one day be used to access the Libra digital currencies, to “Novi.” The company said that the new name was inspired by the Latin words “novus” and “via,” which mean “new” and “way” — and not, as I had assumed, the English words “non” and “viable.” (Jon Porter / The Verge)

Facebook’s internal R&D group launched a new app called CatchUp that makes it easier for friends and family in the US to coordinate phone calls with up to 8 people. I do not get this one at all. (Sarah Perez / TechCrunch)

Coronavirus may have saved Facebook from its fate as a chatroom for old people, this piece argues. There are early signs that young people are returning to the service. (Jael Goldfine / Paper)

Facebook’s Menlo Park headquarters have shaped the city. So too would an exodus of employees now that the company is shifting to remote work. (Sarah Emerson / OneZero)

Things to do

Stuff to occupy you online during the quarantine.

Listen to Boom / Bust: The Rise and Fall of HQ Trivia. It’s a fun new podcast from The Ringer about the company’s dramatic history; I appear on episode two.

Watch all of Fraggle Rock on Apple TV+. One of my favorite childhood shows finally has a streaming home.

Check out the launch lineup for HBO Max, which premieres Wednesday. If you already subscribe to HBO Now, as I do, you’re about to get a lot more movies and TV shows for the price.

Subscribe to Alex Kantrowitz’s new newsletter about big tech. One of my favorite reporters, Alex announced today he’s leaving BuzzFeed to go independent. You can sign up to get his new project via email here.

And finally…

Talk to us

Send us tips, comments, questions, and YouTube comments critical of the Chinese Communist party: casey@theverge.com and zoe@theverge.com.


The Interface

Apple and Google’s COVID-19 notification system won’t work in a vacuum


Last month, before Google and Apple announced their joint effort to enable COVID-19 exposure notifications, I wrote about the trouble with using Bluetooth-based solutions for contact tracing. Chief among the issues is getting a meaningful number of people to download any app in the first place, public health officials told me. And now that such apps are being released in the United States, we’re seeing just how big a challenge that is.

Here’s Caroline Haskins writing Tuesday in BuzzFeed:

Utah Gov. Gary Herbert said on April 22 that the app, Healthy Together, would be an integral part of getting the state back on its feet: “This app will give public health workers information they need to understand and contain the pandemic and help Utahns get back to daily life.”

The state spent $2.75 million to purchase the app and is paying a monthly maintenance fee of $300,000, according to contracts obtained by BuzzFeed News. But as of May 18, just 45,000 of the state’s 3.2 million people had downloaded Healthy Together, according to Twenty.

That’s roughly 1.4 percent adoption, well below the 60 percent or so that public health officials say is necessary to make such exposure notifications effective. And it bodes ill for other states’ efforts to distribute their own apps, particularly in a world where the federal response continues to be confused and even counterproductive.
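For a sense of scale, here is a quick back-of-envelope check of those figures, using only the numbers cited above (the 60 percent target is the rough consensus estimate public health officials cite, not an official threshold):

    # Back-of-envelope adoption math for Utah's Healthy Together app,
    # using the figures reported above.
    downloads = 45_000        # downloads as of May 18
    population = 3_200_000    # Utah's approximate population
    target = 0.60             # adoption level officials say is needed

    adoption = downloads / population
    print(f"Adoption: {adoption:.1%}")                 # -> Adoption: 1.4%
    print(f"Gap to target: {target - adoption:.1%}")   # -> Gap to target: 58.6%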

But a new reason for hope arrived today, in the form of an official release of the Apple/Google exposure notification protocol. The system, which allows official public health apps to use system-level Bluetooth features to help identify potential new cases of COVID-19, is now available as an update to iOS and Android. Three states are working on projects so far, Russell Brandom reported today at The Verge:

Alabama is developing an app in connection with a team from the University of Alabama, while the Medical University of South Carolina is heading up a similar project in collaboration with the state’s health agency.

Most notably, North Dakota is planning to incorporate the system into its Care19 app, which drew significant criticism from users in its early versions.

“As we respond to this unprecedented public health emergency, we invite other states to join us in leveraging smartphone technologies to strengthen existing contact tracing efforts,” North Dakota Gov. Doug Burgum said in a statement, “which are critical to getting communities and economies back up and running.”

In a call with reporters today, Apple and Google said 22 countries have received API access to date. Later this year, an update to iOS and Android will allow people to begin participating in the program even if they haven’t yet downloaded an official public health app.

But as we’ve discussed here before, the best-designed tech interventions won’t be effective if they’re not supported by contact tracing and isolation of new cases. So let’s check in to see how we’re doing on those fronts.

“Contact tracing” was the name originally given to the Apple/Google initiative, before the companies acknowledged that what they were doing didn’t quite live up to that standard. The term refers to getting in touch with people who may have been exposed to a disease and directing them to testing and other resources, and the current consensus view is that this work is best done by human beings. The Apple/Google system, which has been rebranded as “exposure notification,” is intended to augment the work of human contact tracers.
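To make that division of labor concrete, here is a deliberately simplified sketch of how this style of Bluetooth exposure notification works. It is not Apple and Google's actual protocol, which derives identifiers with HKDF and AES rather than a bare hash, but the basic idea is the same: phones broadcast rotating random identifiers, keep a local log of identifiers they hear, and check that log on the device against keys published by people who test positive.

    # Simplified sketch of Bluetooth exposure notification (illustrative only;
    # the real Apple/Google protocol uses per-day keys, HKDF, and AES).
    import os
    import hashlib

    def new_daily_key() -> bytes:
        # Each phone generates a fresh random key every day.
        return os.urandom(16)

    def rolling_id(key: bytes, interval: int) -> bytes:
        # The broadcast identifier rotates every ~15 minutes (96 intervals/day),
        # so passive observers can't track a phone across the day.
        return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

    # Phone A broadcasts; phone B keeps a local log of identifiers it hears.
    key_a = new_daily_key()
    heard_by_b = {rolling_id(key_a, i) for i in range(96)}

    # If A later tests positive, A's recent daily keys are published (with consent).
    # B re-derives the identifiers on-device and checks for overlap; no central
    # server ever learns who was near whom.
    exposed = any(rolling_id(key_a, i) in heard_by_b for i in range(96))
    print("Possible exposure:", exposed)   # -> True

Even in this sketch, all the system can say is that an exposure may have occurred; figuring out what to do next still falls to the human contact tracers it is meant to augment.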

Around the United States and the world, public health departments are hiring people as contact tracers. Officials estimate that we will need at least 100,000 such workers in the United States, and by April, 11,000 had already been hired. California and Massachusetts began hiring early; Illinois, Georgia, and Texas are among the states that have followed. There’s clearly much more work to be done, and quickly, but here’s a case where federal inaction hasn’t totally stopped states from developing a response. (More federal money to hire contact tracers would help, though.)

And what about isolating new cases? This is the process whereby people infected with COVID-19 temporarily relocate to government-run facilities to receive care in an environment where they are unlikely to spread the disease to others. Israel and Denmark are among the countries that have been using such facilities. San Francisco, in a possible step toward a contact isolation program, has begun paying to house people who were homeless in hotels as a measure to reduce the spread of the disease.

Lyman Stone makes the case for rapid deployment of contact isolation in the Washington Post. He imagines a world in which people receive tickets for failing to comply with orders to isolate — but also one in which the facilities themselves are nice enough to get people to go along with the idea voluntarily:

This system also encourages compliance because the centralized facilities would provide isolated individuals with all their basic needs (plus daily supervision so they would get treatment if they become sick). Food and medication can be delivered, WiFi would be free, and governments should provide financial compensation for lost work time. And, since covid-19 is much less dangerous to kids, families could choose for their children to be quarantined with them or separately, whichever they prefer. All of this would require legislation by state governments, but none of it is infeasible.

Alas, contact isolation sounds scary to many people. It conjures images of internment, stigmatization or family separation. But the truth is that the curtailment of our liberties would be minuscule compared with the society-wide lockdowns Americans have been enduring.

At a time when all of us are looking for answers to the pandemic, an approach that combines testing, tracing, and isolation appears to be as close to a sure thing as we have, short of a vaccine. Caroline Chen looked at the research Tuesday in ProPublica:

Researchers in the U.K. used a model to simulate the effects of various mitigation and containment strategies. The researchers estimated that isolating symptomatic cases would reduce transmission by 32%. But combining isolation with manual contact tracing of all contacts reduced transmission by 61%. If contact tracing only could track down acquaintances, but not all contacts, transmission was still reduced by 57%.

A second study, which used a model based on the Boston metropolitan area, found that so long as 50% of symptomatic infections were identified and 40% of their contacts were traced, the ensuing reduction in transmission would be sufficient to allow the reopening of the economy without overloading the health care system. The researchers picked Boston because of the quality of available data, according to senior author Yamir Moreno, a professor at the institute for biocomputation and physics of complex systems at the University of Zaragoza in Spain. “For other locations, these percentages will change, however, the fact that the best intervention is testing, contact tracing and quarantining remains,” he said.

The Apple/Google collaboration represents a chance to use the companies’ vast size and power to make a positive contribution to public health during a crisis. But it will only ever be one piece of the puzzle — and not necessarily one of the larger pieces, either. The good news is that we increasingly understand how COVID-19 can be brought under control. The open question is whether the United States government, to which we have entrusted the job of keeping us all safe, will do what is necessary to make it happen.

Virus tracker

Total cases in the US: More than 1,547,300

Total deaths in the US: At least 92,600

Reported cases in California: 84,449

Total test results (positive and negative) in California: 1,339,316

Reported cases in New York: 359,235

Total test results (positive and negative) in New York: 1,467,739

Reported cases in New Jersey: 150,399

Total test results (positive and negative) in New Jersey: 520,182

Reported cases in Illinois: 98,300

Total test results (positive and negative) in Illinois: 621,684

Data from The New York Times. Test data from The COVID Tracking Project.

Governing

Twitter won’t add a “misleading” label to an article shared by Trump’s campaign manager, Brad Parscale, that claims hydroxychloroquine has a “90 percent chance of helping” COVID-19 patients. Even though the claim is misleading, Twitter says it won’t add a label because the link contains no direct call for action. Here’s Adi Robertson at The Verge:

The incident is an early test of Twitter’s expanding fight against misleading health information. This month, Twitter started labeling tweets that made false or disputed claims about the novel coronavirus, in addition to removing misinformation that could incite harm. A company spokesperson, however, said the tweet is “currently not in violation of the Twitter Rules and does not qualify for labeling.” Twitter says it’s prioritizing tweets that contain a potentially harmful call to action; it’s singled out messages that encouraged people to damage 5G cell towers, for instance. It says it won’t step in to label all tweets that contain unverified or disputed information about the coronavirus.

So far, Facebook also hasn’t made a call on whether the story violates its anti-misinformation rules. But a Facebook spokesperson told The Verge that the article would likely be eligible for fact-checking. The platform typically flags content that’s rated entirely or partially false, warning users and reducing its reach.

China has launched a Twitter offensive in the COVID-19 information war. Twitter output from China’s official sites has almost doubled since January, and the number of diplomatic Twitter accounts has tripled. In recent days, these accounts have been spreading a conspiracy theory that the virus came from a government lab in the US. (Anna Schecter / NBC)

Here’s how “Plandemic” went from a niche conspiracy video about COVID-19 to a mainstream phenomenon. This account includes a blow-by-blow look at who shared what, and when. (Sheera Frenkel, Ben Decker and Davey Alba / The New York Times)

The Israeli surveillance firm NSO Group created a web domain that looked as if it belonged to Facebook to entice targets to click on links that would install the company’s powerful phone hacking technology. Facebook is already suing the surveillance firm for leveraging a vulnerability in WhatsApp to let NSO clients remotely hack phones. (Joseph Cox / Vice)

Facebook hired Aparna Patrie, a Senate Judiciary attorney, to its public policy team amid ongoing antitrust scrutiny. Patrie served as committee counsel under Sen. Richard Blumenthal. (Keturah Hetrick / LegiStorm)

Google signed a deal with the US Department of Defense to build cloud technology designed to detect and respond to cyberthreats. The news comes two years after workers at the search giant protested Google’s contract with the Pentagon for Project Maven, an initiative that used AI to improve analysis of drone footage. (Richard Nieva / CNET)

A judge in Singapore sentenced a man to death via a Zoom call for his role in a drug deal. It’s one of just two known cases where a capital punishment verdict has been delivered remotely. (John Geddie / Reuters)

The rollout of Twitch’s Safety Advisory Council has been a disaster, this piece argues. The group is supposed to advise on issues of safety and harassment, and one of the council members has already become the target of harassment since joining. (Nathan Grayson / Kotaku)

Industry

ByteDance’s valuation has risen to more than $100 billion in recent private share transactions. The news reflects expectations that TikTok’s parent company will keep pulling in new advertisers. Here’s Bloomberg’s Lulu Yilun Chen, Vinicy Chan, Katie Roof, and Zheping Huang:

“The trading of ByteDance is reflective of the global wave of consumers who agree that ByteDance can displace Facebook as the leading social network,” said Andrea Walne, a partner at Manhattan Venture Partners who follows the secondary markets. […]

ByteDance has grown into a potent online force propelled in part by a TikTok short video platform that’s taken U.S. teenagers by storm. Investors are keen to grab a slice of a company that draws some 1.5 billion monthly active users to a family of apps that includes Douyin, TikTok’s Chinese twin, as well as news service Toutiao. That’s despite American lawmakers raising privacy and censorship concerns about its operation. This week, it poached Walt Disney Co. streaming czar Kevin Mayer to become chief executive officer of TikTok.

Twitter is testing a way to let you limit how many people can reply to your tweets. If you’re part of the test, when you compose a tweet, you’ll be able to select whether to allow replies from everyone, people you follow, or only people you @ mention. There are a lot of interesting implications here with regard to harassment and abuse — and also free expression. Jay Peters at The Verge has the story:

Limiting who can reply to your tweets could help prevent abuse and harassment on the platform. By keeping replies to a limited set of people, in theory, you could have more thoughtful and focused conversations with people of your choosing without the risk of trolls jumping into the conversation.

Facebook’s new AI tool will automatically identify items people put up for sale. The company’s “universal product recognition model” uses artificial intelligence to identify consumer goods, from furniture to fast fashion to fast cars. (James Vincent / The Verge)

Deutsche Bank analysts say Facebook’s push into online shopping could generate a $30 billion jump in annual revenue. The company will make money off transaction fees, as well as a possible increase in advertising dollars. (Rob Price / Business Insider)

Mark Zuckerberg went on CBS to discuss Shops. The interview also gets into Facebook’s responsibility to manage misinformation on the platform.

Facebook will limit offices to 25 percent occupancy, put people on shifts and require temperature checks when it lets employees back into workplaces in July. Staff will also have to wear masks in the office when not social distancing. (Mark Gurman and Kurt Wagner / Bloomberg)

Video chat tools like Meet, Zoom, and BlueJeans serve as meeting emulators. They attempt to copy and repeat the form of a meeting, but don’t capture the actual interactions, this writer argues. True! (Paul Ford / Wired)

Zoom suspended its free service to people in China. As of May 1st, individual free users can no longer host meetings on Zoom, but will still be able to join them. (Yifan Yu / Nikkei)

YouTube added bedtime reminders to help people log off late at night. The feature is part of a broader set of YouTube wellness and screen time tools released in 2018 as part of Google’s Digital Wellbeing initiative. A charming throwback to the days when we cared about screen time. (Nick Statt / The Verge)

The secure messaging app Signal added PINs, a new feature to help people move their profiles across devices. The move is intended to make the company less reliant on phone numbers as its users’ primary identification. (Bijan Stephen / The Verge)

People are hiding their social distance lapses from social media, a reversal of the typical use of Instagram where people once bragged about their social activities. All the secret quarantine relationships happening right now will make for a great Netflix series in 2025. (Kaitlyn Tiffany / The Atlantic)

Students are failing AP tests because the College Board testing portal doesn’t support the default photo format on iPhones. Students now have to spend weeks studying before retaking the test. Interfaces are important! Someone should start a newsletter about them! (Monica Chin / The Verge)

Things to do

Stuff to occupy you online during the quarantine.

Update your phone. You’ll need to have the latest version of iOS or Android to begin participating in exposure notification.

Check out beloved satirical website Clickhole, which returned on Wednesday under its new ownership.

Play Crucible, the first big video game developed by Amazon. The Verge’s Nick Statt found the shooter derivative but uniquely enjoyable.

And finally…

The joke in the above tweet is that Twitter disabled replies to it using the new audience-limiting feature it unveiled today.

Talk to us

Send us tips, comments, questions, and exposure notifications: casey@theverge.com and zoe@theverge.com.
