The Maintenance Bubble


Ransomware is a thoroughly buzz-word-ified problem. In my current position as "random dev at an infosec consulting company", I have been the recipient of some truly misguided marketing pitches, which presume both that I have purchasing power within my company and that I'll believe the salesperson when he tells me his company can protect completely against ransomware attacks. Sure, bro! But having come from an enterprise position, I get it. A web of paranoia surrounds the concept of data loss, and ransomware makes it worse by being so malicious: DR drills might cover an earthquake, but who's to say ransomware wouldn't make the jump to the second location? Not me, and probably not my old job's sysadmins, either.

Atlanta is currently illustrating for the industry what a worst-case-scenario cleanup looks like: $9.5m and counting. Katie Moussouris points out on Twitter that a significant portion of Atlanta's costs can likely be attributed to cleaning up tech debt. If you consider the kind of legacy systems a lot of organizations run on - Windows Server 2003 boxes, busted mainframes, ancient versions of open-source frameworks, and so on - then that price tag starts to look a bit more reasonable. The question I'm interested in, then, is: at what point does this sort of recovery cost create, effectively, an economic bubble?
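One way to make that question concrete is a back-of-the-envelope expected-cost comparison: the "savings" from skipping maintenance accrue linearly, while the eventual cleanup bill tends to compound along with the tech debt. The sketch below is a toy model with entirely hypothetical figures - the dollar amounts, incident probability, and growth rate are placeholders I made up, not Atlanta's numbers - and it just illustrates the year in which the accumulated liability quietly outgrows whatever was "saved".

```python
from typing import Optional


def years_until_liability_exceeds_savings(annual_savings: float,
                                          incident_probability: float,
                                          base_recovery_cost: float,
                                          debt_growth_rate: float,
                                          horizon: int = 30) -> Optional[int]:
    """Return the first year in which cumulative expected recovery cost
    overtakes the cumulative 'savings' from skipping maintenance, if ever."""
    cumulative_savings = 0.0
    cumulative_expected_loss = 0.0
    for year in range(1, horizon + 1):
        cumulative_savings += annual_savings
        # Cleanup gets pricier every year the legacy stack is left to rot.
        recovery_cost = base_recovery_cost * (1 + debt_growth_rate) ** year
        cumulative_expected_loss += incident_probability * recovery_cost
        if cumulative_expected_loss > cumulative_savings:
            return year
    return None


if __name__ == "__main__":
    # Hypothetical inputs: skip $500k/year of upgrades, 3% annual chance of a
    # ransomware-scale incident, a $3M cleanup today that grows 25% per year
    # as the debt compounds. None of these are real figures.
    year = years_until_liability_exceeds_savings(500_000, 0.03, 3_000_000, 0.25)
    print(f"Expected liability overtakes the 'savings' around year {year}")
```

With those made-up inputs the crossover lands around year twelve, but the specific year isn't the point: as long as the cleanup cost compounds faster than the savings accrue, the crossover always arrives eventually. That unpriced, accumulating liability is what I mean by a bubble.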

Programmers are notoriously highly paid (hi!). But the development work I do for my company is expected to sell for many, many times what I'm actually paid, and to be maintained in a fraction of the time it took to build. The old adage is that software scales. But, of course, software development and maintenance look very different from each side of the vendor/client wall. Governments, non-profits, and large non-tech corporations all treat tech as a cost center. Programmers with the same skills and many of the same duties as vendor developers might expect similar pay and benefits, but for the most part their skills and what they produce are not viewed as part of the central "purpose" of the company or government. This means, inevitably, that corners get cut.

If working at a start-up is often a buffet of expensive hardware and software, working at a larger organization is more like being told to go hunt and gather in a wasteland. In my short time doing enterprise software development, I watched a project drag out to over twice its scoped length due to an initial refusal to invest in a true upgrade. The project in question was mission-critical and ran on a deprecated operating system. In addition to hundreds of thousands in budget overrun, the time delay meant mounting - thankfully unrealized - security risk.

Target's breach brought the cost of poor security into the public consciousness, but the technical reality is, of course, much more complicated. Pentests can uncover vulnerabilities, and that's very useful for companies when there is executive will and internal expertise to actually fix them. But too often in large organizations - both for-profit and government - executive will is lacking. And almost invariably, you'd need a stable of experts with Tony Stark-esque intellect to accurately scope the changes needed to turn a tangle of vulnerable, poorly maintained legacy applications into a hardened ecosystem.

So: billions in risk. Hospitals, cities, companies as sitting ducks. A market for ransomware that is only growing. Underpinning all of this are artificially low costs, wrung out of IT budgets by virtue of cutting corners in hiring, in obsolescence management, in disaster recovery, in vendor upgrades. Perhaps instead of asking why Atlanta's costs are so high, we should be asking everyone else why their costs are so low.