Professionalism/Software Vulnerability Disclosure

Introduction

Software is rarely flawless. A large cybersecurity workforce shortage and inadequate security education for programmers create a perfect environment for the development of insecure systems.[1] Business competition shifts companies' focus away from quality toward speed, letting software flaws slip through. These flaws can resurface later, when the system is in active use by many users.

Software vulnerability disclosure refers to the act of releasing the details of a security flaw in an application to a particular audience. In the industry, these flaws are frequently referred to as "zero-day vulnerabilities" (or "zero-day exploits" when weaponized). Depending on the communication method and the audience, disclosure can lead to completely different outcomes. Knowledge of these flaws carries value, which turns them into "vulnerability equities."

Practical Considerations

Vulnerability rediscovery is common. Researchers estimate that 15% to 20% of all vulnerabilities are independently discovered at least twice within a year, a higher rate than previously thought.[2] Certain methods of disclosure, or the lack of any disclosure, can leave a vulnerability unfixed, putting every user of that system at significant risk.

In the recent past, many organizations believed that as long as no one could see what was inside the code, it was secure. This is known as "security through obscurity."[3] If someone found a vulnerability, it was common to either ignore the researcher or threaten them with legal action, leaving the vulnerability unpatched. In 2011, Patrick Webster, an Australian security expert who had worked as a security analyst for a police department, found a serious security flaw in software owned by First State Super. While working with the system, he discovered that users could pull full names, addresses, ages, insurance information, fund amounts, beneficiaries, and employer information for all of the site's 770,000 users, including police officers and politicians. Upon discovery, he immediately notified the company, believing he was doing a good deed. Within days, his account was disabled, his computers were confiscated, and he was told he could be held liable for the cost of fixing the flaw.[4] Finding a flaw implies, to some extent, that the researcher attacked the system. Without explicit written permission or a "bug bounty" program, such findings put researchers in a legal gray zone. Webster's mistake was that he downloaded the data to his own computer while examining the flaw, which First State used against him. This kind of retaliation has a chilling effect: researchers become afraid to report the flaws they find. Fortunately, new laws are being passed to protect the rights of these whistleblowers.[5]

Upon disclosure, there is often a lack of action from the vendor. James Glenn, a security researcher working in Bulgaria, found a flaw in a Cisco video surveillance system. He presented his findings in a detailed report to his supervisor and to Cisco's incident response team, but no action was taken. After he followed up and met with Cisco representatives, he was fired within three days. The problem was fixed only years later, after a whistleblower lawsuit was brought against Cisco on behalf of the US government.[6]

Disclosure Options

Upon intentional or unintentional discovery of a bug, a researcher has several options and paths of disclosure.

  • Do nothing. This leaves every user of the system at risk. If the researcher gets hacked, the attackers can steal the vulnerability and use it themselves. This has happened to governments, including the United States, which stockpile vulnerabilities on the rationale that they might need them in the future.[7]
  • Report the flaw to the vendor. This puts the researcher at the mercy of the company, which may then reward or sue them. The monetary rewards are generally smaller than what other buyers offer. Companies are invested in keeping their data safe, and a breach can badly damage their reputation, but they often underestimate the risks.
  • Release the vulnerability publicly. The obvious downside is that anyone, even those with no security experience (known as "script kiddies"), can take the exploit and use it, since no patch yet exists. The situation is even worse when the vulnerability takes a long time to fix.
  • Sell the vulnerability on a grey or black market. This puts the exploit in the hands of a nation state or a malicious actor who may use it to advance their own agenda. The payouts are generally the highest, but the researcher loses control over how the vulnerability is used.

Each of these options has different consequences. A researcher can fight to get the flaw fixed quickly, get rich, or stay quiet and avoid any career risk. Each option also carries its own downsides, so the benefits have to be weighed against them. Based on their own agenda and understanding of ethics, each researcher has to make this choice for themselves, and almost any of the choices above can be justified. These mindsets can be categorized into three general motives: altruism, profit, and politics.

Public Disclosure

Nowadays, a researcher may elect to publish a zero-day vulnerability online. Often, this is done for bragging rights rather than with malicious intent. This means of disclosure may also be chosen when the researcher's report has not received enough concern or attention. This was the case for Michele Thompson, an Arizona mother who published a vulnerability in Apple's FaceTime app on Twitter after days of attempting to contact the tech giant. Thompson's 14-year-old son had discovered a method by which FaceTime could be used to receive audio from a remote phone without alerting its owner.[8] When she realized her calls were falling on deaf ears, Thompson turned to Twitter to reveal the security issue, which quickly raised its priority at Apple. Although it is less common than responsible disclosure, public disclosure remains a viable method for bringing attention to critical issues. It does, however, open the researcher up to potential legal action from a company claiming the researcher illegally hacked its products.

Responsible Disclosure

Currently, the most common method for disclosing zero-day vulnerabilities is a negotiated process in which the researcher enters an agreement with the affected vendor. The agreement gives the company a set amount of time (usually 90 days, a precedent set by Google's "Project Zero") to replicate and fix the vulnerability. After that period has elapsed, the researcher can freely publish the vulnerability for bragging rights or any other purpose. Ideally, this gives the vendor time to fix the issue before the vulnerability is published and can be abused. In some cases, however, responsible disclosure agreements may not allot enough time to fix a more complex vulnerability, or the vendor may simply deem the vulnerability too low a priority to address. The responsible disclosure process can also be arduous for researchers, who may endure days or weeks of tedious communication before seeing results. Some companies offer standing bounties for researchers who find vulnerabilities in their systems; Microsoft, for example, has offered up to $300,000 for vulnerabilities in its Azure cloud services. This can, however, leave companies sorting through many complex vulnerability reports, some of which are inevitably dropped. Responsible disclosure practices have also led to the formation of larger organizations that set their own vulnerability pricing models and timeframes, allowing vendors to avoid negotiating bug prices directly with researchers. Programs such as Google's Project Zero and brokers such as Zerodium have given vulnerability disclosure more structure.

Irresponsible Disclosure

Irresponsible disclosure is any way of disclosing a vulnerability other than notifying the company or vendor responsible for the software. It can take many different forms, and it usually results from a researcher seeking personal gain, whether through payment or simply bragging rights. This is where the ethical issues of software vulnerability disclosure lie, as researchers must decide whether to act morally or to pursue personal gain. One security researcher using the moniker "SandboxEscaper" posted a zero-day vulnerability on Twitter that exploited a flaw in Windows. She appeared to be frustrated with Microsoft's bug bounty program, which shows how important it is for companies to have a usable bug-reporting system in place. Companies have also been at fault for irresponsible disclosure. MedSec, a cybersecurity firm, found a vulnerability in medical equipment from St. Jude Medical. Rather than notify St. Jude, MedSec partnered with an investment firm to short St. Jude stock before disclosing the vulnerability, effectively profiting from the discovery.[9] Many in the security field opposed this move, as standard practice would have called for MedSec to notify St. Jude first.

Failure to Disclose

In some instances, researchers or organizations find a vulnerability, choose not to disclose it in any capacity, and make use of the vulnerability themselves. This commonly occurs when a government agency or company comes across a vulnerability and sees an opportunity to use the exploit to its own advantage, typically by gathering intelligence on competitors or other nation states. One notable instance came from the NSA. The agency became aware of a severe flaw affecting many versions of Windows. Rather than notify Microsoft, it built the flaw into an exploit tool called EternalBlue, which it used for nearly five years for classified purposes.[10] In 2017, code from this tool was leaked and then used in a ransomware attack known as WannaCry, one of the largest cyberattacks ever.[11] The NSA alerted Microsoft to the flaw only after the code was leaked, and it faced heavy criticism from the technology community.

Vulnerability Markets

Many researchers choose to sell their vulnerabilities on third-party markets, usually seeking a larger payout than responsible disclosure would provide. Researchers divide these markets into three categories: white, grey, and black. White markets include bug bounty programs and other forms of responsible disclosure to the affected parties. Researchers who sell on the grey or black markets can expect a larger payout, but should not expect the vulnerability to be addressed any time soon.

Grey Markets

Grey markets are mainly a medium through which government agencies, defense contractors, and other brokers can purchase software vulnerabilities. Typically, some entity acts as an intermediary so that both the buyer and the seller can remain anonymous. Transactions in this market are technically legal, and the buyers tend to pay less than those on the black market.

Black Markets

Black markets are more nefarious in nature, typically involving parties who want to use software vulnerabilities for some illegal purpose. Criminal organizations are the typical buyers on this market, although government agencies whose needs cannot be met on the grey market may also use it. Transactions on this market are almost always illegal and take place on the dark web, the part of the internet that is not indexed by search engines and requires specialized software to access. As a result, it is easier to remain anonymous on these markets. Transactions on the black market also pay the highest on average.

References

  1. Crumpler, W., & Lewis, J. A. (2019, January 29). The cybersecurity workforce gap. CSIS. https://www.csis.org/analysis/cybersecurity-workforce-gap
  2. Herr, T., Schneier, B., & Morris, C. (2017, July). Taking Stock: Estimating Vulnerability Rediscovery. Harvard Kennedy School Belfer Center. https://www.belfercenter.org/sites/default/files/files/publication/Vulnerability%20Rediscovery%20%28belfer-revision%29.pdf
  3. SecurityTrails. (2020, February 13). Security through obscurity. https://securitytrails.com/blog/security-through-obscurity
  4. Whittaker, Z. (2018, February 19). Lawsuits threaten infosec research — just when we need it most. ZDNet. https://www.zdnet.com/article/chilling-effect-lawsuits-threaten-security-research-need-it-most/
  5. Rodriguez, K., Opsahl, K., Cardozo, N., Williams, J., Ugarte, R., & Israel, T. (2018, October 16). Protecting security researchers' rights in the Americas. Electronic Frontier Foundation. https://www.eff.org/wp/protecting-security-researchers-rights-americas
  6. Goodwin, B. (2019, August 9). Whistleblowers: James Glenn’s battle with Cisco opens new front on cyber security. Computer Weekly. https://www.computerweekly.com/news/252468089/Whistleblowers-James-Glenns-battle-with-Cisco-opens-new-front-on-cyber-security
  7. Chappell, B. (2017, May 15). WannaCry ransomware: Microsoft calls out NSA for 'stockpiling' vulnerabilities. NPR. https://www.npr.org/sections/thetwo-way/2017/05/15/528439968/wannacry-ransomware-microsoft-calls-out-nsa-for-stockpiling-vulnerabilities
  8. McMillan, R. (2019, January 29). Teenager and his mom tried to warn Apple of FaceTime bug. The Wall Street Journal. https://www.wsj.com/articles/teenager-and-his-mom-tried-to-warn-apple-of-facetime-bug-11548783393
  9. Bone, J. (n.d.). Independent research firm confirms St. Jude security vulnerabilities. MedSec. https://medsec.com/entries/stj-lawsuit-response.html
  10. Burgess, M. (2017, June 28). Everything you need to know about EternalBlue – the NSA exploit linked to Petya. Wired. https://www.wired.co.uk/article/what-is-eternal-blue-exploit-vulnerability-patch
  11. Sherr, I. (2017, May 19). WannaCry ransomware: everything you need to know. Cnet. https://www.cnet.com/news/wannacry-wannacrypt-uiwix-ransomware-everything-you-need-to-know/