
The Security Burden Shouldn't Rest Solely on the Software User


This is the final installment in a five-part series. Part 1 explored the problems stemming from our collective unwillingness to hold software providers accountable for vulnerability-ridden code. Part 2 argued that the technical challenges associated with minimizing software vulnerabilities weigh in favor of, not against, imposing liability on software makers. Part 3 explained why leaving software security in the hands of the market is an idea about as bad as the average software user’s cyber hygiene. Part 4 described why nothing short of rules on the books would change the current user-liability regime.

Noted computer security expert Daniel Geer thinks you should bear the costs of the insecure code you use. It’s nothing personal. He acknowledges that holding the end user responsible for being the “unwitting accomplice to constant crime” is far from a perfect cybersecurity strategy. But he concludes that for the time being, it is the least “worst” option:

If you say that it is the responsibility of Internet Service Providers (ISPs)—that “clean pipes” argument—then you are flatly giving up on not having your traffic inspected at a fine level of detail. If you say that it is the software manufacturer’s responsibility, we will soon need the digital equivalent of the Food and Drug Administration to set standards for efficacy and safety. If you say that it is the government’s responsibility, then the mythical Information Superhighway Driver’s License must soon follow. To my mind, personal responsibility for Internet safety is the worst choice, except for all the others.

Geer’s concern is well-founded: imposing security responsibilities on entities other than the end user will no doubt curtail some of the freedom and functionality that end users currently enjoy as consumers. This is true of software and Internet access specifically, and of computer information systems more generally.

But Geer’s conclusions are unnecessarily stark, in part because they assume that security is a pie that cannot be intelligently shared—a conclusion that, it should be noted, we would never be inclined to accept in our offline lives. Our physical security may ultimately be our burden to bear, but we expect fast food chains not to poison us, local police to do their rounds and our neighbors to call 9-1-1 if they see suspicious activity. Some invisible web of law, professional obligations and communal norms colludes at all times to keep us alive and our property in our possession.

Would vesting ISPs with circumscribed security responsibilities—such as responding to or recording highly unusual traffic patterns that suggest an ongoing DDoS attack—require end users to “flatly” relinquish data privacy? Many ISPs already implement limited security mechanisms, and carefully designed public-private data-sharing restrictions could go a long way toward addressing concerns about improper use of subscriber information.

Similarly, holding software providers accountable for their code need not entail exposing software providers to lawsuits for any and all vulnerabilities found in their products. Liability critics battle a straw man when they make arguments like this one, from computer security authority Roger Grimes: “If all software is imperfect and carries security bugs, that means that all software vendors—from one-person shops to global conglomerate corporations—would be liable for unintentional mistakes.”

Liability is a weapon far more nuanced than its critics believe. Geer and Grimes see liability as a big red button—a kind of nuclear option, to be avoided at all costs. Proponents, meanwhile, understand liability as a complex machine ideally outfitted with a number of smart levers.

Consider: software’s functions range from trivial to critical; security standards can be imposed at the development or testing stage, in the form of responsible patching practices or through obligations for timely disclosure of vulnerabilities or breaches; the code itself might be open source or proprietary, and in either case may be distributed for free. An effective liability regime is one that takes these many factors into account when it comes to designing rules, creating duties or imposing standards.

For starters, it would make no sense to hold all software providers to the same duty of care and the same liability for breach of that duty, irrespective of the software’s intended use and associated harms. As Bruce Schneier observed back in 2005, “Not all software needs to be built under the overview of a licensed software engineer . . . [but] commercial aircraft flight control software clearly requires certification of the responsible engineer.” Software embedded in life-critical systems or critical infrastructure should consistently be held to more rigorous standards than ordinary commercial software, and its makers should be liable either for harms caused by products that deviate from those standards or for flaws that are not remediated in a timely manner.

Imposing this kind of liability will require restricting the disclaimers of warranty and limitations on remedies found in standard software license agreements. As far as recommendations go, this is a familiar rerun. In a 2007 report, the House of Lords Science and Technology Committee recommended that the European Union institute “a comprehensive framework of vendor liability and consumer protection,” one that imposes liability on software and hardware manufacturers “notwithstanding end user licensing agreements, in circumstances where negligence can be demonstrated.”

The tricky part, of course, is putting into place a system through which negligence can be reliably demonstrated. As a general principle, insofar as security can be understood as a process rather than an end, negligence should be assessed based on a provider’s failure to adhere to certain security standards rather than on absolutes like the number of vulnerabilities in the software itself. Indeed, raw vulnerability counts might reveal more about a program’s popularity than about its inherent insecurity. Any particular vulnerability might not prove a software program unacceptably defective—but an examination of the general processes and precautions through which the software was produced just might.

Laws merely establishing modest duties on the part of software makers—and subjecting them either to private suit or government fine in the event of harms resulting from breach—could help push the industry to develop and implement best practices. These practices could in turn constitute an affirmative defense against negligence claims. Best practices might range from periodic independent security audits to participation in what David Rice, Apple’s global director of security, describes as a ratings system for software security, an analogue to the National Highway Traffic Safety Administration’s rating system for automobile safety.

Existing legislation that penalizes companies for failing to safeguard sensitive user information offers a useful model for imposing narrowly circumscribed security duties on software providers. For example, in 2006, the Indiana legislature enacted a statute that requires the owner of any database containing personal electronic data to disclose a security breach to potentially affected consumers but does not require any other affirmative act. The terms of the statute are decidedly narrow: it gives the state attorney general enforcement powers, but it affords affected customers no private right of action against the database owner and imposes no duty to compensate them for inconvenience or potential credit-related harms in the wake of a breach.

There are other factors to consider in calibrating a liability system. Liability exposure should be to some extent contingent, for example, on the availability of the source code. It is difficult to imagine, for instance, a good argument for holding the contributors to an open source software project liable for their code. Whether or not you believe that the process by which open source software evolves actually constitutes its own security mechanism, à la Linus’s Law (“given enough eyeballs, all bugs are shallow”), the fact is that open source software offers users both the cake and the recipe. Users are free to examine the recipe and alter it at will. By offering users access to the source code, open source software makes users responsible for the source code—and unable to recover for harms.

Imposing liability on open source software is not only an incoherent proposition; it also has problematic First Amendment implications. Code is, after all, more than a good or service. It is also a language and a medium. And clumsily imposed liability rules could place significant and unacceptable burdens on software speech and application-level innovation.

In her book on the relationship between internet architecture and innovation, Barbara van Schewick gives us some sense of why we should be wary of stifling liability laws with her description of the nexus between software applications and sheer human potential:

The importance of innovation in applications goes beyond its role in fostering economic growth. The Internet, as a general-purpose technology . . . creates value by enabling users to do the things they want or need to do. Applications are the tools that let users realize this value. For example, the Internet’s political, social or cultural potential—its potential to improve democratic discourse, to facilitate political organization and action, or to provide a decentralized environment for social and cultural interaction in which anyone can participate—is tightly linked to applications that help individuals, groups or organizations do more things or do them more efficiently, and not just in economic contexts but also in social, cultural or political contexts.

All of this applies, of course, to proprietary applications, but the liability calculus for closed-source software should come out a little different. When it comes to proprietary applications, the security of the code does not lie with users but remains instead entirely within the control of a commercial entity. The fact that much proprietary software is “free” should not foreclose liability: a narrowly tailored liability rule might provide that where users are “paying” for a software product or service with their data, a breach of that data could be grounds for government-imposed fines or, to the extent it causes individuals to sustain harm, private damages.

This is by no means a comprehensive overview of all possible approaches to constructing a software liability regime. It is rather a glimpse of a few of the many levers we can push and pull to turn security from an afterthought into a priority for software makers. Such a change will come with costs, imposed on software makers and redistributed to us, the users. But we must keep in mind that whatever we pay in preventive costs today is low compared with what we could pay in remedial security costs tomorrow.

As a matter of routine, we accept inconveniences, costs and risk redistribution in other areas of our lives. Drugs are required to undergo clinical testing, food is inspected and occasionally “administratively detained,” and vaccines are taxed to compensate the small fraction of recipients who will suffer an adverse reaction. Restrained measures to police software, too, can be understood as part of a commonsense tradeoff between what is cheap and functional and what is safe and secure.

*          *          *

This series was dedicated to moving past old questions: whether software can be made more secure, and whether providers have the capacity to improve the security of their products. The question is no longer even whether manufacturers—and by means of price increases and less timely releases, end users—should be compelled to bear the inconvenience and cost of these security enhancements. The question, and the challenge, lies in designing a liability regime that incentivizes software providers to improve the security of their products and compensates those unduly harmed by avoidable security oversights—without crippling the software industry or unacceptably burdening economic development, technological innovation, and free speech.

We don’t need a red button. That’s the beauty of the law—turns out we have plenty of rheostats at our disposal.