Novel Threats is a series of brief conversations with fellows and affiliates of the Reiss Center on Law and Security exploring the intersection of the coronavirus pandemic and key national security challenges.

Randy Milch on Coronavirus, Contact Tracing and the Role of the Private Sector

October 14, 2020


Randy Milch is the Co-Chair of the NYU Center for Cybersecurity, a Distinguished Fellow at the Reiss Center on Law and Security, and a Professor of Practice at NYU School of Law. He was most recently executive vice president and strategic policy adviser to Verizon’s chairman and CEO. He served as the company’s general counsel from 2008 to 2014, and before that was general counsel of several business divisions within Verizon. At Verizon, Milch chaired the Verizon Executive Security Council, which was responsible for information security across all Verizon entities. Milch was responsible for national security matters at Verizon beginning in 2006, and has served as the senior cleared executive at Verizon. Earlier in his career, Milch was a partner in the Washington, DC office of Donovan Leisure Newton & Irvine.


Contact tracing is often identified as a crucial tool in defeating or keeping the coronavirus at bay, primarily enabled by technology. How does this work and what are the major concerns that privacy and civil liberties advocates have around such technology? Do you think we’ll see big tech companies play a large role in developing solutions for contact tracing?

The foundational step in contact tracing is finding those with whom an infected person has been in close contact. With the advent of the coronavirus pandemic, health authorities around the world turned to the near-ubiquity of smartphones to aid their contact tracing efforts. A few nations quickly deployed smartphone applications, but in April Apple and Google announced they were collaborating on an Exposure Notification System (ENS). In the United States this had the effect of shutting down serious consideration of any alternatives, and as of October 1, seven months into the pandemic, eleven states are rolling out ENS-based contact tracing applications.

ENS is a framework that allows health authorities to deploy a customized and highly privacy-protective contact tracing application. Rather than using smartphone-generated data first to determine the geographic locations of the infected and then to determine which other smartphones have been in the same locations, the ENS-based app uses Bluetooth technology to sense when app-enabled smartphones are within a specified distance of one another and if so, to exchange secure random codes. If the contact lasts more than a specified time, the app adds the exchanged codes to a list of close contacts stored on both phones. The relevant health authority sets both the contact distance and contact length parameters. 

When an app user who tests positive for COVID-19 is contacted by health department personnel, they will be asked if they are willing to share their app’s list of “close contact” codes. The aggregate list of confirmed case codes is sent daily to app users and their phones will display a COVID alert if there is a match to any code on the user’s close contact list. A person receiving a COVID alert is not provided any information about the infected person they were near, and the relevant department of health cannot identify the individuals on any “close contact” list. 
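The exchange-and-match flow described above can be sketched in a few lines of Python. This is an illustrative simplification, not the actual ENS protocol (which rotates cryptographically derived Bluetooth identifiers and performs matching inside the operating system); the class and function names, and the distance and duration thresholds, are invented for illustration.

```python
import secrets

# Parameters that the health authority sets in the real framework
# (the specific values here are hypothetical):
CONTACT_DISTANCE_M = 2.0   # maximum distance to count as a "close" contact
CONTACT_MINUTES = 15       # minimum duration to count as a contact

class Phone:
    def __init__(self):
        self.my_codes = []            # random codes this phone has broadcast
        self.close_contacts = set()   # codes received from nearby phones

    def new_broadcast_code(self):
        # ENS broadcasts rotating random identifiers; modeled here
        # as 16 random bytes rendered as hex
        code = secrets.token_hex(16)
        self.my_codes.append(code)
        return code

    def record_contact(self, code, distance_m, minutes):
        # Store the other phone's code only if both thresholds are met
        if distance_m <= CONTACT_DISTANCE_M and minutes >= CONTACT_MINUTES:
            self.close_contacts.add(code)

def check_exposure(phone, confirmed_case_codes):
    # The daily, on-device match against the published list of codes
    # voluntarily shared by users who tested positive
    return bool(phone.close_contacts & set(confirmed_case_codes))

# Two phones spend 20 minutes within 1 meter of each other
alice, bob = Phone(), Phone()
bob.record_contact(alice.new_broadcast_code(), distance_m=1.0, minutes=20)

# Alice tests positive and agrees to share her codes; Bob's phone
# finds a match and would display a COVID alert
assert check_exposure(bob, alice.my_codes)
# Alice learns nothing about Bob, who never uploaded his codes
assert not check_exposure(alice, bob.my_codes)
```

Note that the matching happens entirely on the user's device: the health authority publishes only anonymous codes, which is what makes the structure decentralized.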

“ENS’s decentralized structure has won plaudits from privacy advocates and appears intrinsically safe from a cybersecurity perspective. The efficacy of the approach, however, is uncertain.”

ENS’s decentralized structure has won plaudits from privacy advocates and appears intrinsically safe from a cybersecurity perspective. The efficacy of the approach, however, is uncertain. One study suggests that 60% of the population must use a tracing app for it to be fully effective in suppressing COVID-19 (although lower usage may also have a salutary effect). Even assuming a sliding scale of effectiveness, there is good reason to be concerned that we will be stuck at the lower end.

Smartphone penetration in the United States is about 80%, so app usage among smartphone owners would have to be about 75% just to meet the 60% goal. More concerning, smartphone penetration varies significantly by age, and inversely to current COVID-19 mortality rates. Only 53% of the U.S. population 65 and over owns a smartphone, versus 96% of those ages 18-29. Yet CDC statistics as of September 30, 2020 show that only 0.9% of COVID-associated fatalities occurred in the wider 15-34 age group, while 78.8% befell those over 65. This suggests that even with widespread adoption, those most at risk will go unrecognized and unwarned by any decentralized app.
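The adoption arithmetic can be checked directly from the figures cited above (variable names are mine):

```python
# Figures cited above
population_target = 0.60        # app usage needed across the whole population
smartphone_penetration = 0.80   # share of the U.S. population with a smartphone

# Required adoption rate among smartphone owners alone
required_adoption = population_target / smartphone_penetration
print(f"{required_adoption:.0%}")  # prints "75%"
```

Among those 65 and over, with 53% penetration, even universal adoption by every smartphone owner in that group would cover barely half the population most at risk.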

There are also reasons to be concerned about app penetration even among smartphone owners. Use of the app is voluntary in the United States, and the evidence shows limited uptake in small, socially cohesive countries (like Singapore (25%) and Iceland (38%)) that deployed voluntary apps earlier in the pandemic. Even if the app is downloaded, the user must affirmatively enable both the closeness-sensing and the alert notification functions. Finally, an app user who tests positive for COVID-19 must decide to upload the “close contacts” list kept by their smartphone. Inattention, inability, and the lingering distrust of Apple and Google among about a third of the U.S. population will introduce leakage at each of these steps.

“If states lacked the heft or bandwidth to set the privacy attributes of a COVID app, should the federal government have stepped into the void?”

There are a range of other smartphone approaches, each involving a rough trade-off between privacy and efficacy. According to the MIT Technology Review Covid Tracing Tracker, about a third of the nations with apps appear to have opted for a “centralized” approach that provides health care authorities with access to the identities of those in “close contact” with an infected person. Health departments regard this as an important part of suppressing contagion; privacy advocates worry that the information will be misused. About a quarter of the nations with apps employ smartphone location information, but this raises fears that location tracking—which could be accomplished without consent and on a mass scale—will be used for more nefarious purposes. Neither of these approaches can be implemented on the ENS platform.

The trade-offs between an executive function like contact tracing (or national security or crime control) and privacy normally are worked out in a lengthy and iterative process involving repeated court review of executive action and eventual legislative attention. But while state governments are deciding whether to use a smartphone-based contact tracing app, the underlying balance between health surveillance and privacy was effectively made by Apple and Google. 

Perhaps ceding this decision to Big Tech is the inevitable consequence of state control over contact tracing, control that state health authorities jealously guarded even in the face of the COVID pandemic. If states lacked the heft or bandwidth to set the privacy attributes of a COVID app, should the federal government have stepped into the void? The CDC says its primary role in contact tracing is to provide “guidance and support to help [state] health departments launch effective contact tracing programs.” What would have happened if the CDC had set the balance between surveillance and privacy and sought the creation of an app somewhat less protective of privacy than the ENS? Would Apple and Google have acquiesced and produced an app like the centralized Australian, French, and New Zealand apps? Or would they have balked, insisting that their vision of privacy superseded the CDC’s?

If we are lucky, a strong post-pandemic study of the various contact tracing apps will provide us with a good view on which apps were most effective at reducing the spread of COVID-19. Perhaps then we can have an effective app at the ready, and one that does not merely reflect Big Tech’s view of the balance between public health and privacy.

You worked extensively on cybersecurity issues in the private sector. When it comes to COVID-19 and addressing some of the cyber vulnerabilities and concerns around the pandemic, what do you think the role of the private sector should be?

As my colleague Judi Germano noted in this series over the summer, the widespread shuttering of physical work locations beginning early this spring prompted a dramatic shift to remote work across the country. Businesses that could be operated remotely faced the immediate challenges of sharply expanding their remote work infrastructures and of “reimagining” the business functions that could be handled remotely. From a cybersecurity perspective, work suddenly shifted outside the defensive perimeters that corporate security teams had created, increasing businesses’ vulnerability to attack. 

The opportunity was not lost on the bad guys. For instance, one study early in the crisis found that ransomware attacks in March 2020 increased 148% over baseline levels in February and that attacks were correlated to the release of notable COVID news, suggesting that a dispersed workforce might be more susceptible to attacks in an atmosphere of fear and uncertainty. In a survey this summer of 1,000 Chief Information Security Officers, 91% reported a sharp increase in the number of cyberattacks they suffered as a result of a vastly greater number of employees working from home.  

“In the face of the extraordinary stressors of more risk and fewer resources, the existing cyber-regulatory structure does little to provide firms with good information about how much they must invest in cybersecurity to meet their obligations.”

Pandemic or no pandemic, however, the business response to cybersecurity threats is a product of the interplay between the availability of resources and the regulatory obligations the business faces. Let’s take these in order. Cybersecurity experts have long been concerned that corporate cyber defense is underfunded. When asked, security professionals in firms large and small report that their budgets are too small and that they lack critical personnel. And while the remote work response to the pandemic increased the corporate attack surface, for most firms the pandemic also diminished revenues. Earnings per share in the S&P 500 declined 31.4% in the second quarter, and the forecast for this quarter is for a further 20.5% decline. Firms facing decreasing revenues look to cut costs, and cybersecurity is very much a cost center. Gartner, for instance, sharply cut its forecast for 2020 security spending from a robust 8.7% increase to a more modest 2.4% increase. As you would expect, resources are even more constrained in smaller firms. In one survey, 22% of small business owners said they jumped to remote work “without a clear policy to mitigate or prevent cybersecurity threats/attacks.”

In the face of the extraordinary stressors of more risk and fewer resources, the existing cyber-regulatory structure does little to provide firms with good information about how much they must invest in cybersecurity to meet their obligations. Federal cybersecurity regulation is charitably described as a “patchwork” of sectoral regulators imposing cybersecurity obligations within their regulatory ambit and the FTC regulating the great run of domestic businesses by investigating “unfair” cybersecurity practices. The generally more prescriptive standards imposed by the sectoral regulators vary widely, while the FTC’s requirement is that companies provide “reasonable” cybersecurity. Determining whether a company is living up to the relevant cybersecurity standard usually happens only after a publicly reported cyber incident. And the post-incident fine or settlement—frequently shared with state Attorneys General—is essentially arbitrary.   

“The good news is that with time and experience, remote work is being hardened against cyberthreats. The bad news is that these gains will be unevenly distributed across the business landscape and that regulators will be largely ineffective at encouraging good cyber behavior.”

Moreover, regulatory reaction to the pandemic varied widely. Some regulators (e.g., the SEC, FINRA, HHS and the FTC) took the opportunity to remind the regulated community of their obligations and of the need to safeguard remote work. These efforts were reinforced by special publications from NIST and CISA—which have no enforcement capabilities—providing guidelines and resources to protect remote work. Some regulators (e.g., the CFTC) relaxed existing rules to accommodate remote work, while others (e.g., HHS’s Office for Civil Rights, which enforces HIPAA’s cybersecurity rules) decided to “not impose penalties for noncompliance with the regulatory requirements under the HIPAA Rules . . . in connection with the good faith provision of telehealth during the COVID-19 nationwide public health emergency.”

In the end, the pandemic will not change the overall cybersecurity dynamic. The private sector remains responsible for protecting the networks and data it controls, regardless of the resources available. Ex ante regulatory requirements remain all over the lot and the usual regulatory response to a breach—roaming the battlefield to dispatch the wounded—will likely return after a short and selective hiatus. The good news is that with time and experience, remote work is being hardened against cyberthreats. The bad news is that these gains will be unevenly distributed across the business landscape and that regulators will be largely ineffective at encouraging good cyber behavior.
