
Other companies can learn from Microsoft's vulnerability mistakes


steven36


Recently, there have been a couple of Microsoft vulnerability disclosures that were problematic. When a security researcher finds a nasty bug, it's not always obvious what to do.

On December 30, Google's Project Zero released details--including a working exploit--for a vulnerability in Windows 8.1. This was problematic because Google held to a strict 90-day timetable for releasing the details, regardless of whether or not Microsoft had released a patch, and the 90 days expired in the middle of the winter holidays.

On January 5, security researcher Paul Price released details on a web API for Moonpig.com that potentially exposed the data of millions of customers. Although he had reported the issue to the company, they had still not fixed the problem a year later and were not very communicative. That leaves a researcher with a difficult decision: keep the details secret, or put pressure on the vendor by going public and exposing its customers in the process. Price chose the latter.

Finally, Google stuck to its guns on the 90-day timeline again, this time releasing details just two days before the patch was released in Microsoft's regular "Patch Tuesday" cycle, despite senior Microsoft staff reaching out to try to achieve a different outcome.

Microsoft and Google, along with firms like Adobe, Facebook, Apple, and Amazon, are among the world's biggest and best software producers; despite the recent friction, they have mature ways of handling vulnerability reports. The most interesting vulnerability disclosure dilemmas actually concern the huge group that consists of everybody else. More firms resemble Moonpig than Microsoft.

Knock knock, is anyone home?

Two common complaints from security researchers are:

  • It is difficult to find the right place to report a vulnerability.
  • Vendors communicate little or nothing about how they intend to address the issue.

Imagine a hotel chain or a movie theatre chain that uses vulnerable software online for managing bookings. They probably don't write the software themselves, and they don't publish a way to report vulnerabilities in the software they use. Their customer service staff are not trained to deal with web vulnerabilities. If someone finds a problem and contacts support at the hotel or movie theatre chain, it is hard to get the right information into the hands of the people who can use it.

Likewise, many firms online are inexperienced in dealing with software vulnerabilities. They may not understand that security researchers can and will disclose the details publicly. Thus, they may see no value in communicating with researchers to assure them that the information was received and is being acted on. They might be naïve in their understanding of the impact of the vulnerability, believing that only a handful of elite hackers could possibly cause them grief, or that by silencing the one reporting researcher they can stop the vulnerability from being exploited by anyone.

What does a security researcher do if no one from the vendor replies? Do they issue ultimatums? Do they release exploits publicly anyway? Do they sit quietly? Should they sell the exploit to a vulnerability marketplace?

The two sides of public pressure

Proponents of public naming and shaming point out that releasing an exploit puts public pressure on vendors to fix vulnerable software. If the vendor can produce a quick fix, everyone wins. If, however, a quick fix isn't possible, then it is likely that end users become collateral damage when the vulnerabilities are exploited.

On the flip side, malicious attackers might be actively exploiting a vulnerability, even though it is not well known publicly. No one can credibly assert that only one person or firm knows about an exploit. A software vendor considering a security researcher's report cannot justify a slow response by assuming that no one else knows about the problem.

There is a chance that a non-public bug might be actively exploited anyway. Making a bug public almost guarantees that it will be exploited widely. Is the impact--mainly to innocent users of a vulnerable service--worth it in order to pressure the vendor? Does a researcher who publicly discloses a vulnerability bear some ethical or moral responsibility for the resulting damage to innocent users?

A cure worse than the disease

The other question is whether something can be fixed safely at all. If a software vendor is forced to fix a complex vulnerability on a very fast timeline, there is a good chance that they will make a mistake. The patch could actually break features that had been working, regardless of whether it fixes the vulnerability. The faster the fix is thrown together, the more likely it is poorly tested.

Often there are stakeholders who aren't immediately obvious to outsiders like security researchers. An online retailer may have relationships with upstream or downstream data providers, and "fixing" the vulnerability might stop critical sales functions from working; fixing it might have real costs in lost revenue. If a vendor like Microsoft pushes a hasty patch for a security issue, only to have it significantly break some part of Windows that had previously worked, users will not be grateful. It is one thing, from the outside, to say "this is an easy fix." It is quite another to put a company's reputation on the line by pushing a slapdash patch that causes more harm than exploiting the vulnerability would.

What is fair?

Disclosing vulnerabilities the right way is a subject for another blog post, but some principles are clear:

Researchers must strive to reach the correct vendor. This might mean multiple emails, multiple "contact us" web forms, reaching out on social media, or even telephoning. A researcher who wants to say "they didn't respond" should back that claim with a long dossier of reasonable attempts to contact the firm.

Vendors must have a process for receiving and acting on security reports. This process needs to include a means of communicating progress or status back to the researcher who reported it. This is a two-way street, and firms need to be trustworthy, clear, and predictable in their discussions of security vulnerabilities.

Discretion is important

Discretion is valuable in commercial dealings. There are real reasons to be careful about admitting culpability and making promises that can come back to harm the business. On the other hand, saying nothing is not usually the best bet in today's hyper-connected, instant-opinion Internet. A regular drumbeat of contact with real substance will go a long way toward garnering good will.

We expect software to have bugs. What sets software firms apart is how they deal with them.

Source

Totally agree with you, but now Google refuses to provide a fix for two-thirds of its own user base, just because it is outdated (Android 4.3 or lower). Bruh! Double standards by Google.
