Change is hard. And improving security will create change. This creates resistance in various ways. One of my favorite types of resistance (that is, the most frustrating) is the problem of induction. Let’s take a scenario where you find a gaping vulnerability (say, incorrect firewall rules, a SQL injection vulnerability, architecture issues, whatever) and approach the owning organization about correcting it. Resistance comes in the form of debate. This debate is goofy: it asks the wrong questions or accepts risks the owners do not understand, such as:
- “Is the risk that great? We’ve always had this and had zero issues!”
- “I don’t believe you. Things are just dandy, as they always have been!”
- “We’re the only ones who know; I accept the risk as it is low.”
How do you win this debate? These questions aren’t necessarily the wrong questions to ask (I take back my assertion above); more precisely, their orientation is wrong. My newer tactics try to point this out. The classic “it’s always been a risk and hasn’t been a problem yet” is frustrating because it’s stubborn. Logic may or may not help, depending on the individual. Let’s pretend it’s an honest debate and not an emotional or political one (ha!). Applying observations alone (again, the problem of induction) to this problem fails for several reasons:
- The duration of observation is not long enough to support a meaningful conclusion. Sure, the sun will rise tomorrow: we have about 4.5 billion years of observations to reference and a well-rounded idea of the rate of occurrence for outliers (a solar flare reaching out 1 AU is very unlikely but not impossible). This system has been deployed for (let’s pretend) two years; over those two years the threat of being exploited has continually risen due to the proliferation of easy tools, malware, and the rise of internet-based crime. Indeed, security as a practice exists to control the outliers in a sustained fashion.
- Your observation has blind spots. Chances are that their observations focus on functionality, not on security data. Reconnaissance, exploit attempts, or even full breaches may have gone unnoticed. This is a tough argument to make; I’d suggest having data to back you up.
- Risk does not mean what you think it means. A risk may have a low chance of occurrence, yet if it has a high severity, the overall risk can still be high. This is exactly the point on which they should be persuaded.
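That last point can be sketched as a tiny scoring function. This is a minimal illustration, assuming a simple 1–5 qualitative scale for likelihood and severity with made-up thresholds; it is not drawn from any particular standard:

```python
def risk_rating(likelihood: int, severity: int) -> str:
    """Combine a 1-5 likelihood and a 1-5 severity into a coarse rating."""
    score = likelihood * severity  # ranges from 1 to 25
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# A rarely-exploited SQL injection (likelihood 3) that exposes the whole
# database (severity 5) still rates "high" -- low frequency does not
# neutralize high impact.
print(risk_rating(3, 5))  # high
print(risk_rating(1, 2))  # low
```

The exact numbers don’t matter; the shape of the argument does: “it hasn’t happened yet” only speaks to the likelihood axis and says nothing about severity.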
“Persuaded” is a good word. This isn’t a logic puzzle or a dissertation on how security works. Defeating such false arguments is a means to an end. At the end of the day, the business needs to understand risk versus cost. And so do you.
What other false arguments exist and how do you battle them?