What should happen when a security researcher discovers a vulnerability in a popular software project? Should details of the vulnerability be released so that users can protect themselves? Or should it be kept secret so that bad actors can’t exploit it?
There are drawbacks to both approaches. Immediate full disclosure puts users at risk. Bad actors find out about the vulnerability too, and there isn’t a lot users can do to protect themselves until patches are released. In contrast, secrecy allows software developers to ignore vulnerabilities, and there is no guarantee that bad actors don’t know about them already.
The industry has, for the most part, settled on a hybrid approach called responsible disclosure. When a security researcher finds a vulnerability, they inform the software's developers, but they don't go public immediately. The developer is given time to release patches. Once the patches are released, the vulnerability is publicized so that users know to update. If the developer fails to release a patch in a reasonable amount of time, the vulnerability is disclosed anyway, so that users can protect themselves. The amount of time given to developers varies; Google's Project Zero allows 90 days.
Responsible disclosure attempts to balance competing goods. Users who are in the dark about vulnerabilities can’t respond to the threat, but immediate disclosure gives bad actors an advantage. Secrecy might prevent widespread exploitation of a vulnerability before it’s been patched, but developers, especially developers of proprietary software, may not be inclined to invest time and money into bug fixes for vulnerabilities no one knows about. Responsible disclosure is the golden mean between complete transparency and security by obscurity.
Responsible disclosure depends on the assumption that software is updated when patches are released. Secrecy following discovery is justified by the risk disclosure poses to users, who would be exposed without any way to fix the problem. Delayed disclosure is justified by the belief that once patches are available, users are safe.
But what happens when users don't update? They are in as much peril as if the vulnerability had been disclosed before any patch existed. Bad actors know all about the vulnerability, including, in the case of open source software, exactly which code is vulnerable and how to exploit it. Unfortunately, failing to patch isn't rare: many recent data leaks and security breaches were the result of the exploitation of known vulnerabilities for which patches were widely available.
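The check users need to make is mechanical: is the installed version at or above the first release containing the fix? A minimal sketch, with hypothetical version numbers:

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '2.4.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_patched(installed: str, first_fixed: str) -> bool:
    """True if the installed version already contains the fix."""
    return parse_version(installed) >= parse_version(first_fixed)

# Running 2.3.9 when the fix shipped in 2.4.0: still vulnerable.
print(is_patched("2.3.9", "2.4.0"))  # False
print(is_patched("2.4.1", "2.4.0"))  # True
```

Real version schemes are messier than dotted integers, but the point stands: once an advisory names the fixed version, anyone can tell whether a given installation is exposed, and that includes attackers.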
The point is this: over many years, a system for handling vulnerabilities has evolved, a system that aims to keep software users as safe as possible. Developers, security researchers, and corporations cooperate to minimize the risk to users. But users (businesses, server administrators, hosting providers) have a vital role to play. They have to update their software when patches become available. If they don't, they put their business, their customers, and the wider population at risk.