Should You Worry About Medium/Low Risk Vulnerabilities?
Let’s say you just received a penetration test report from a testing firm and you’re working with your internal IT or development team to triage and fix the issues it raised. Someone on your team is of the mindset that fixing the medium/low priority issues in the report isn’t worth the resources it would take to implement solutions. That stance might seem a little dangerous at first thought, but a lot of folks probably couldn’t articulate exactly why. Today, we’re going to help you articulate the problem with leaving medium/low risk vulnerabilities hanging around for an extended period of time.
First and foremost, different organizations use different scales for ranking the risk of security vulnerabilities, but let’s assume we’re talking about the CVSS 3.0 scale here. Medium and low risk issues are those with a base score below 7.0 (4.0-6.9 for medium, 0.1-3.9 for low). In most penetration tests these will (hopefully) be the bulk of your results, and they can represent a huge variety of problems, from high impact/low likelihood to low impact/high likelihood and everything in between.
Why should you care about security-related edge cases or small potatoes when you’ve got new features waiting and other high-visibility projects to get to? Put simply, it’s death by a thousand paper cuts. You have to consider not just the vulnerability itself, but an attacker’s ability to chain that vulnerability with others, thereby increasing the impact to your organization. Let’s talk through one short example:
If your organization’s primary customer-facing web application has a Cross-Site Request Forgery (CSRF) vulnerability, it may well be rated as medium risk (depending on a number of factors) simply because it requires social engineering to pull off. Or so you think. But let’s say a new feature you implement has a cross-site scripting (XSS) vulnerability in it that you’re not aware of until your next annual penetration test. An attacker can chain an XSS exploit with the CSRF vulnerability to avoid having to steal a session token and impersonate a user. Instead, they can skip straight to issuing a request on the authenticated user’s behalf in the background, leveraging the CSRF issue so no further interaction from the user is required. The attacker has now bypassed several mitigating controls and potential detection scenarios, jumping straight to something like creating themselves an account or issuing a system command.
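To make that chain concrete, here is a minimal sketch of what the injected payload might look like. The endpoint, field names, and the idea that account creation is exposed this way are all hypothetical; the point is that the victim’s browser sends the forged request with their own session cookie attached, so the attacker never has to steal it.

```typescript
// Hypothetical payload delivered through the XSS flaw. Because the app has
// no CSRF protection, the server accepts this forged request: the victim's
// browser attaches their session cookie automatically.
fetch("/admin/users", {
  method: "POST",
  credentials: "include", // send the victim's session cookie with the request
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  // Create an account the attacker controls, using the victim's privileges.
  body: "username=attacker&password=Attacker123!&role=admin",
});
// No clicks or prompts: this fires silently the moment the victim loads the
// page containing the injected script.
```

A per-request anti-CSRF token, often written up as a medium finding on its own, would at least force the attacker to fetch and replay a valid token first rather than firing off a one-line request.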
Of course, that is one specific scenario, but the same kind of chaining can happen with a number of other vulnerabilities. It also cuts the other way: that XSS exploit can be partially disarmed, making session hijacking more difficult, by addressing a low-priority, defense-in-depth finding such as setting the HttpOnly flag on session cookies.
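To show how small that defense-in-depth fix can be, here is an illustrative sketch using a Node/Express-style backend with express-session; your stack will differ, but HttpOnly, Secure, and SameSite are standard cookie attributes regardless of framework.

```typescript
import express from "express";
import session from "express-session";

const app = express();

// Illustrative configuration, not a drop-in for your application: the point
// is that the "low" finding is often a one-line cookie attribute change.
app.use(
  session({
    secret: process.env.SESSION_SECRET ?? "change-me",
    resave: false,
    saveUninitialized: false,
    cookie: {
      httpOnly: true, // injected script can no longer read the session cookie
      secure: true,   // cookie is only sent over HTTPS
      sameSite: "lax" // limits which cross-site requests include the cookie
    },
  })
);

app.listen(3000);
```

With HttpOnly set, an XSS payload can no longer read document.cookie and exfiltrate the session token for reuse elsewhere, which is exactly the session-hijacking path this low-priority fix makes harder.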
Your process for deciding what to fix, and in what order, should use an ROI-style analysis: weigh the resources required to implement a fix against what you get in return. If a quick fix can take a high-impact exploit off the table, even one with a remote chance of occurring, why wouldn’t you make it? Conversely, if something takes more resources, or requires a new feature or tool that improves your security posture without fixing a specific vulnerability, it can often wait for a planned rollout rather than an emergency fix.
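If it helps to make that weighing explicit, here is a rough sketch of the kind of back-of-the-napkin ranking we mean. The findings, effort estimates, and risk-reduction numbers are invented for illustration; the idea is just to sort by how much risk each fix retires per unit of effort.

```typescript
// Purely illustrative numbers: effort in person-days, risk reduction on an
// arbitrary 0-10 scale. The exact values matter less than the comparison.
interface Finding {
  name: string;
  effortDays: number;
  riskReduction: number;
}

const findings: Finding[] = [
  { name: "CSRF on account settings", effortDays: 2, riskReduction: 5 },
  { name: "Missing HttpOnly flag on session cookie", effortDays: 0.5, riskReduction: 4 },
  { name: "Verbose server version banner", effortDays: 1, riskReduction: 1 },
];

// Sort by return on effort: cheap, chain-breaking fixes float to the top,
// even though none of them is a "high" on its own.
const prioritized = [...findings].sort(
  (a, b) => b.riskReduction / b.effortDays - a.riskReduction / a.effortDays
);

for (const f of prioritized) {
  console.log(`${f.name}: ${(f.riskReduction / f.effortDays).toFixed(1)} risk retired per day`);
}
```

None of this replaces judgment, but writing the trade-off down tends to surface the quick wins that otherwise sit in the backlog indefinitely.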
As penetration testers, we try to help you through this decision-making process, and we assign severities in the first place to help guide it. We also try to take your specific organization into account when recommending remediations, based on what we learned during the test. But just because something isn’t a high risk doesn’t mean it’s not a risk. If you need help making this argument, or are unsure what to do in a specific situation, let us know and we’d be happy to discuss your options.