PROTECT-IP, SOPA, and the real threat to national security

Posted on December 8, 2011 - 15:43 by imeister

A substantial portion of the broader technology policy community has stepped up its efforts to raise awareness of the current debate surrounding the PROTECT-IP Act, pending in the Senate, and the Stop Online Piracy Act (SOPA), pending in the House. Our friends over at the Center for Democracy and Technology have produced an excellent summary of opposition views from a broad range of interested parties, including public-interest advocates like the EFF and ACLU, law professors, and the Global Network Initiative. We at StopBadware are squarely opposed to both bills.

Others have identified the manifold ways in which the bills violate traditional norms of notice, disregard procedural and substantive due process, seriously undermine hosting provider immunity under the CDA and DMCA, and threaten the health of the global Domain Name System. The House bill's co-sponsors, Reps. Lamar Smith (R-TX) and John Conyers (D-MI), seem oblivious to the threat it poses, claiming it addresses "critical intellectual property issues that relate to national security, public health and safety, and the expansion of respect for intellectual property abroad".

To claim that the bill will meaningfully improve America's national security posture is preposterous on its face — one must conflate risks to U.S. copyright holders with the national interest writ large — and, with the exception of rogue pharmacies, very few infringing websites facilitate threats to public health. But let's take SOPA's sponsors at their word for a minute and consider it a given that they want to make a serious attempt to address these important issues. Why wouldn't they target websites distributing badware instead?

Let me be clear: even if, ceteris paribus, PROTECT-IP and SOPA targeted malware distributors rather than copyright infringers, they'd still be lousy bills. In all of its work, StopBadware strives to encourage private industry and regulators to respect the free speech and due process rights of Internet users, including webmasters.

That said, Congress could do much more than it has to give the security community the power to take action (under strict judicial supervision) against operators of badware websites. Malware is indisputably a national security and public health issue. As we've mentioned before in The State of Badware, criminals can set up, co-opt, and maintain badware websites because control of the infrastructure that sustains them is split among webmasters, web hosting providers, ISPs, registrars, registries, and national governments. While security researchers can collect evidence of badware behavior on websites and inform appropriate parties (see our Best Practices for Badware Reporting for more on that), they have little power to compel these parties to take these reports seriously.

Imagine if Congress were to empower security researchers with civil causes of action like the ones PROTECT-IP and SOPA grant to copyright holders. For example, Congress might attempt any or all of the following:

  • require web hosting providers to disable access to malicious content they host;
  • require DNS providers to suspend nameserver services for domain names used primarily to spread badware;
  • require US-based registrars to suspend registrations of such domain names;
  • require US-based registries to revoke registrations of such domain names.

Drafting a statute like the above — one that respects issues of standing, free speech, and due process — would pose a major challenge. (I suspect that's why PROTECT-IP and SOPA's sponsors made no attempt to do so.) Practically speaking, bringing successful challenges against recalcitrant infrastructure operators could be an expensive, time-consuming endeavor. But it might produce better results than the status quo.

Why? A primary effect of the Computer Fraud and Abuse Act (18 U.S.C. 1030) is to make it a crime to infect computers with malware. From this we can infer that computer owners have a right to be free of malware. In practice, as we know, pursuing responsible parties, or even determining with certainty who they are, exceeds prosecutors' technical and logistical resources. It brings to mind an ancient and well-loved principle of equity — that there can be no right without a remedy. Congress should seriously consider creating remedies that support this right and enforce it against entities who are otherwise complicit.

An approach like the one I've sketched out isn't without its pitfalls. U.S. courts have very little experience with the malware threat landscape, and judicially sanctioned interventions against malware distributors have been chiefly limited to large botnet takedowns, frequently with the assistance of security researchers and large corporations. (See our write-up on the role of government and private parties in the Coreflood takedown here.) Most day-to-day badware website takedowns occur through private persuasion, not judicial compulsion.

Yet given the growth and persistence of badware websites, the security community should take a long, hard look at the existing system of badware report handling and ask itself whether private self-regulation has been effective at stemming the tide of malware. Consider the global WHOIS system, which was intended, at least in theory, to link domain names and IP addresses to the people responsible for their use (or abuse). Any badware website reporter will tell you that WHOIS results are rarely the end of an inquiry, and frequently contain outdated or outright fraudulent information. This makes investigation much harder, and is still no guarantee that a complaint will receive an airing from any party, much less an appropriate resolution. In short, we have an accountability problem: one can cry 'malware' all one likes, and no one has to listen.
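
To see why WHOIS is rarely the end of an inquiry, it helps to look at what a reporter actually gets back. The sketch below, a minimal illustration rather than production tooling, issues a raw port-43 WHOIS query (per RFC 3912) and scrapes the key/value fields out of the reply; the server shown is the .com/.net registry's, and field names vary by registry:

```python
import socket

def whois_query(domain, server="whois.verisign-grs.com", port=43):
    """Send a raw WHOIS query (RFC 3912) and return the response text.

    The server shown handles .com/.net registry lookups; the registry's
    reply often just points at the registrar's own WHOIS server, and the
    contact data found there may be stale or outright fabricated.
    """
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def parse_whois(text):
    """Collect 'Key: Value' lines into a dict, keeping the first value
    seen for each key and skipping comment/terms-of-use lines."""
    fields = {}
    for line in text.splitlines():
        if line.lstrip().startswith(("%", "#", ">>>")):
            continue
        key, sep, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if sep and key and value and key not in fields:
            fields[key] = value
    return fields

# Usage (requires network access):
#   record = parse_whois(whois_query("example.com"))
#   print(record.get("Registrar"), record.get("Updated Date"))
```

Even when this works, nothing obliges the listed registrant or registrar to respond to what the reporter finds; the lookup is easy, the accountability is not.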

Not so in the courts. If fraud, deception, negligence, and organized crime are properly the province of the judiciary in meatspace, why not in cyberspace?

I present this image of government regulation to the cybersecurity community as an invitation to prove to the U.S. government, and to the world, that the community can bridge this trust and accountability gap itself. StopBadware has always sought to embody the change we seek in the Internet through voluntary, collaborative efforts with industry experts (as in our badware reporting and web hosting best practices work). But bills like PROTECT-IP and SOPA should remind us that when the Internet community fails to act, intrusive and ill-informed legislation may seek to 'solve' our problems for us.

In short, we agree with others in our community who have come out forcefully against PROTECT-IP and SOPA. Not only do the bills fail to solve the problems they identify (and create massive new ones), they reflect completely misguided thinking about the root causes of those problems and ignore approaches that might actually solve them. SOPA's sponsors want to improve cybersecurity by giving copyright holders a license to 'kill' infringers with no notice. We'd have them do it by giving malware victims their day in court.

Supporting a voluntary code for ISPs

Earlier this week, we submitted comments in response to a request for information from the U.S. Departments of Commerce and Homeland Security. The topic was development of a voluntary code of conduct for industry, particularly ISPs, to help address botnets. The RFI follows similar national efforts in Australia, Germany, and Japan.

StopBadware, of course, already helps to reduce the threat of botnets by helping to prevent and clean up websites that deliver malware to end users. That said, there's much still to be done, and we support the approach broadly proposed by the government's RFI. Here's a brief summary of our comments:

  • Prevention of malware infection is multi-faceted, including everything from cleaning up badware websites to educating end users. We detail several of these facets, highlighting examples of effective tools and approaches within each.
  • When discussing industry-driven initiatives, it is critical to look to users' needs. We use our experience working with owners of compromised websites to suggest how industry can effectively meet the needs of users whose devices have been infected.
  • A voluntary code of conduct for ISPs is a good step, but there are several opportunities where pooled resources could do more than each industry player working independently. We suggest three such cases and argue that independent non-profit organizations are better suited than for-profit companies or government to offer such resources.

Here's the full set of comments. Please let us know if you have any additional thoughts on this topic!

Building upon Scott Charney's "public health" proposal

At the RSA security conference this week, Microsoft's Scott Charney continued advocating a "public health model" for fighting malware and other security threats. I missed Charney's keynote, but I spoke with a couple of members of his Trustworthy Computing team on Monday, and I read Charney's blog post and some third-party accounts of his presentation.

The public health metaphor is an interesting one, and one that folks like Joe St. Sauver at the University of Oregon have discussed over the years. It's not a perfect metaphor, to be sure. As Scott acknowledges in his blog post, malware and related threats are the result of deliberate, malicious human action; diseases, in contrast, evolve naturally. Disease also spreads physically, not virtually, which changes a lot of the dynamics. The impact, and thus society's response, differs, too. There's a big difference between the death of a person and the death of a computer (or the theft of a person's account number).

Still, for all these critiques, public health is a decent model to turn to for lessons. Malware does, in many cases, spread in patterns similar to an infectious disease, and epidemiologists have spent a lot of time developing strategies for tracking and interrupting the spread of such diseases. Many of the same techniques that work for protecting human health—immunization, reducing exposure, educating people, treating disease—have (or could have) clear analogues in computer security. Similarly, though the security side now lacks many of the institutions and social structures that public health has built over the past couple hundred years, there's no reason these couldn't be developed.

Therein lie the questions that are most interesting to me in this discussion. What institutions have to develop, and what new approaches do we need to adopt, to adequately protect a couple billion Internet users? Charney proposes, by way of example, a completely voluntary, private approach to enhancing security: a bank offering an option that checks for updated AV software before allowing a PC to log in. Within that idea are several related questions. Since a single bank adopting that strategy won't have that large an effect, is there a way to get all (or most) banks to adopt the same strategy? Can some organization develop a common framework for how this can be done effectively, consistently (to avoid consumer confusion when they switch banks), and responsibly? Is this something that can be entrusted to, and achieved by, industry, or does it require a third party to get involved? Is there a public policy requirement, or are there market incentives that can encourage banks to do their part?

There are a number of other questions that come to mind along similar lines. Can we learn anything from the community health model that has been successful in addressing certain diseases that target specific populations? What kinds of communities are relevant on the Internet, and how do they differ from those in physical space? How do we ensure that the most extreme (but potentially effective) interventions, such as quarantining an infected computer or network, are done with oversight, due process, and within clearly prescribed guidelines? Do we need to start mandating data reporting, much as we do with disease reporting, to ensure that those who need it have full access to the information they need to intervene? And who are those that will intervene? Industry players? Non-profits like StopBadware? Government agencies akin to the Centers for Disease Control and Prevention?

At StopBadware, we have some thoughts that we'll try to blog about over the next few months, and we're excited to engage in the conversation. We also welcome input from the community; please post your comments here or share them with us via e-mail, Facebook, Twitter, or—if we happen to cross paths—in person.