Vulnerabilities are generally disclosed in one of two ways. Responsible disclosure involves giving the vendor a quiet heads-up along with time to fix the problem. Full, immediate disclosure doesn't provide that opportunity: the vulnerability is made public straight away. You could consider this irresponsible disclosure, although that's a subjective point.
Where I say "vendor" here, I'm referring to the person that owns / runs the system in which the vulnerability was found.
Say you discovered a problem with a vendor's system that allowed you to download all customer records. The responsible thing to do is to contact the vendor and let them know, while at the same time not taking a copy of any of the data. A security-conscious vendor should respond, possibly asking for further details, and will aim to fix the problem as soon as possible.
Meanwhile, the security researcher doesn't act on the information (data theft) or mention it widely to others (protecting the vendor, and the data, to some extent).
How long do you give the vendor?
There's no hard-and-fast rule on the time period to give a vendor. I've seen a consensus that 90 days should be enough time, and I'd certainly consider that fair. In the event a vendor needs longer, it's reasonable to expect them to keep in touch with the researcher, explaining any delays or barriers. Equally, if the vendor goes silent and is making no attempt to resolve the problem, I have seen people publish their findings early, or on the dot of 90 days.
My own experiences
I've been on both sides of the fence for this issue. A few years ago I reported a direct object reference vulnerability in the orders section of one of our suppliers. After changing the order ID I was able to see how much someone had paid for their order, something potentially sensitive as the supplier could be selling the product at different rates. Reviewing the HTML source I could see the customer name and address too, so there was also information disclosure. The supplier had the issue fixed in under an hour, which was pretty good if you ask me! I've written about avoiding direct object reference problems before.
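To make the direct object reference problem concrete, here's a minimal sketch in Python. The data model, function names, and field names are all illustrative, not from the supplier's actual system - the point is simply that the fix is an ownership check before returning the record.

```python
# Illustrative in-memory "orders" store. In the real incident this
# would be a database behind a web application.
ORDERS = {
    1001: {"owner": "alice", "total": 49.99, "address": "1 Main St"},
    1002: {"owner": "bob", "total": 29.99, "address": "2 High St"},
}

def get_order_insecure(order_id):
    # Vulnerable: any user can fetch any order just by changing the
    # ID in the request - this is an insecure direct object reference.
    return ORDERS.get(order_id)

def get_order_secure(order_id, current_user):
    # Fixed: the record is only returned if it belongs to the caller.
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != current_user:
        return None  # in a web framework, respond with 403 or 404
    return order
```

With the insecure version, a logged-in "alice" can read order 1002 and see Bob's price and address; with the ownership check she gets nothing back.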
One of the first things I did when working on eVitabu was to put a security policy together that outlined how we'd respond to incidents. After about a year I received an email from a security researcher suggesting there was a configuration information disclosure on our website, and he recommended we remove it. Fortunately this wasn't an issue with eVitabu itself but with the main website - a page had been left in place that ran phpinfo(), revealing the server's configuration. This was quick to fix, and we thanked the researcher for the information.
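If you want to check your own sites for a leftover page like this, a simple marker test on the response body is enough - phpinfo() output reliably contains a "PHP Version" heading. This is a hedged sketch: the marker strings are heuristics, and the candidate paths you'd probe (e.g. /info.php, /phpinfo.php, /test.php) are just common examples.

```python
import re

# Heuristic markers that appear in phpinfo() output.
PHPINFO_MARKERS = re.compile(r"phpinfo\(\)|PHP Version", re.IGNORECASE)

def looks_like_phpinfo(html: str) -> bool:
    """Return True if a fetched page body looks like phpinfo() output."""
    return bool(PHPINFO_MARKERS.search(html))
```

You'd pair this with your HTTP client of choice, fetching each candidate path on your own site and flagging any page where the check returns True.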
How do you report the issue?
If an organisation has a security policy then you're off to a good start, as often that will outline who to contact and what the company expects. It'll often also explain how you'll be treated (no legal action if reporting responsibly for example). The eVitabu security policy is quite brief and covers the above areas, whereas Tesla's is more verbose.
Another method is to look at the site's security.txt file. Sadly this is not yet a standard, but it is being implemented more widely. For example, you can see my security.txt here. The file is supposed to be at /.well-known/security.txt (I need to correct my config, it seems!).
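For anyone who hasn't seen one, a security.txt file is just a short plain-text file of key-value fields. The example below is illustrative - the contact address and URLs are placeholders, not my real file:

```
# Served from https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00Z
Policy: https://example.com/security-policy
Acknowledgments: https://example.com/hall-of-fame
Preferred-Languages: en
```

The Contact and Policy fields are the ones a researcher cares about most: who to email, and what treatment to expect.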
Do I get a reward?
Sometimes. In the case of the main website I mentioned above the researcher was added to our security hall of fame. A number of companies now run a bug bounty programme, where there are (cash) rewards in addition to just recognition.
Can responsible disclosure go wrong?
Absolutely! After a security researcher reported a vulnerability to the City of York Council, they were met with a hostile response. York contacted the police (who, sensibly, took no action against the researcher) and ignored some of the key information the researcher provided. The council's treatment upset many in the cyber security community, with plenty voicing the opinion that this was "not the correct approach".
The problem lay in York's One Planet York app, which sometimes served one citizen's personal data to other users. It was reported on by the BBC and York's local press, and the security researcher's company, RapidSpike, released a statement explaining the truth behind the situation. The researcher had stolen no data, had disclosed responsibly, and was vilified for no reason.
I actually watched some of the council meeting where this was discussed, and the councillors continued with the vilification, complaining the issue should have been responsibly disclosed. York completely mishandled the issue.
(York aren't alone, but they are an organisation from my home country.)
Why publish before a fix?
Sometimes a vendor isn't interested in the problem and claims "there is no issue". At that point the researcher may consider that the safest thing to do, for everyone else, is to publicly disclose the vulnerability. For the researcher's sake, hopefully the vendor has publicly turned down their offer of responsible disclosure - then the public can have no doubt the researcher tried.
Another possibility is that the issue is so serious the researcher believes people need to know. For example, if there's a bug that allows data to be stolen, or an account to be impersonated, it may be better for people to remove their data before the vendor has resolved the issue.
Then there's personal choice - some researchers just choose to publish first.
There's been an increase in organisations handling data leak and vulnerability reports in a good way, and I'm really hoping that continues. The severity of the problem may cause a researcher to publicly release details sooner rather than later, as may mistreatment. Ultimately, vendors need to work with cyber security professionals, rather than against us, in order to protect people's data.
Banner image: a word cloud based on some words that sprang to mind.