The Cost of Cyberattacks Is Less than You Might Think
Interesting research from Sasha Romanosky at RAND:
Abstract: In 2013, the US President signed an executive order designed to help secure the nation's critical infrastructure from cyberattacks. As part of that order, he directed the National Institute for Standards and Technology (NIST) to develop a framework that would become an authoritative source for information security best practices. Because adoption of the framework is voluntary, it faces the challenge of incentivizing firms to follow along. Will frameworks such as that proposed by NIST really induce firms to adopt better security controls? And if not, why? This research seeks to examine the composition and costs of cyber events, and attempts to address whether or not there exist incentives for firms to improve their security practices and reduce the risk of attack. Specifically, we examine a sample of over 12,000 cyber events that include data breaches, security incidents, privacy violations, and phishing crimes. First, we analyze the characteristics of these breaches (such as causes and types of information compromised). We then examine the breach and litigation rate, by industry, and identify the industries that incur the greatest costs from cyber events. We then compare these costs to bad debts and fraud within other industries. The findings suggest that public concerns regarding the increasing rates of breaches and legal actions may be excessive compared to the relatively modest financial impact to firms that suffer these events. Specifically, we find that the cost of a typical cyber incident in our sample is less than $200,000 (about the same as the firm's annual IT security budget), and that this represents only 0.4% of their estimated annual revenues.
The result is that it often makes business sense to underspend on cybersecurity and just pay the costs of breaches:
Romanosky analyzed 12,000 incident reports and found that typically they only account for 0.4 per cent of a company's annual revenues. That compares to billing fraud, which averages 5 per cent, or retail shrinkage (i.e., shoplifting and insider theft), which accounts for 1.3 per cent of revenues.
As for reputational damage, Romanosky found that it was almost impossible to quantify. He spoke to many executives and none of them could give a reliable metric for how to measure the PR cost of a public failure of IT security systems.
He also noted that the effects of a data incident typically don't have many ramifications on the stock price of a company in the long term. Under the circumstances, it doesn't make a lot of sense to invest too much in cyber security.
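The loss rates quoted above are easy to put side by side. As a rough sketch (the revenue figure below is hypothetical, chosen so that 0.4% lands near the ~$200,000 typical-incident cost reported in the paper; only the percentages come from the article):

```python
# Hypothetical comparison of annual loss categories as a share of revenue,
# using the rates quoted above: 0.4% cyber incidents, 5% billing fraud,
# 1.3% retail shrinkage. The $50M revenue figure is invented for illustration.
ANNUAL_REVENUE = 50_000_000  # hypothetical $50M/year firm

LOSS_RATES = {
    "cyber incidents": 0.004,
    "billing fraud": 0.05,
    "retail shrinkage": 0.013,
}

def annual_loss(revenue, rate):
    """Dollar loss for one category at the given rate of revenue."""
    return revenue * rate

for category, rate in sorted(LOSS_RATES.items(), key=lambda kv: kv[1]):
    print(f"{category}: ${annual_loss(ANNUAL_REVENUE, rate):,.0f} ({rate:.1%})")
```

At that revenue, a 0.4% cyber loss is $200,000 a year, while billing fraud at 5% would be $2.5M: an order of magnitude apart, which is the paper's point.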
What's being left out of these costs are the externalities. Yes, the costs to a company of a cyberattack are low to them, but there are often substantial additional costs borne by other people. The way to look at this is not to conclude that cybersecurity isn't really a problem, but instead that there is a significant market failure that governments need to address.
My problem with most companies' attitude towards cyber-security was never that they were losing money, but that they just lost a lot of sensitive data from a lot of people that they had pledged to protect, and that they can avoid almost all consequences by simply going "Well, that's the internet for ya!". In the meantime, users have to change passwords and security questions, submit to credit monitoring, etc., or deal with the fraud that will otherwise ensue...
All it would take would be a law requiring all databases must also include complete name, address, SSN, fingerprints, retina scan, DOB, phone numbers, PINs, email addresses, passwords, credit card information, bank account information, etc., for every employee of 'manager' level or above, and all members of the Board of Directors, plus their spouses, both sets of parents, children and grandchildren. Make the consequences of a data breach personal and it'll be prevented. :-)
A breach that leads to the leak of customers' private data should count as recklessness and be fined heavily by the authorities. Companies that fail to secure their customers' data and get hacked constantly should be forbidden to store sensitive data on their own servers -- if you can't secure it, outsource to third-party companies that know what they are doing. While we treat hacked companies as victims, I don't see any way to force them to take security seriously.
Not only is the cost on average cheaper than employee theft, it has other aspects that make spending money on security above the "lip service" level rather pointless.
From a business point of view, whether an attack is even noticed is effectively random. Also, even with APTs, the consequences are always in "future time". Thus, with discounting applied against the sunk costs of prevention, the cost of a breach is almost always less in the future than it is now.
We see this nonsense with the big accountancy firms who, when caught breaking the rules, spin the regulator out for years. The "guilty partners" are often retired by then, so their only worry is personal fines (which the firm picks up for them). The fines the firm pays may sound large but are in fact generally small compared to income from re-investment over the five to ten years they have spun the regulator out.
Thus the longer the "breach" goes unacknowledged, the larger the profits/dividends compared to a breach that is discovered and acknowledged in a timely manner with restitution made at the time.
Further, if the breach looks significant, the "guilty parties" can leave with a clean record before it gets acknowledged. Those that come in behind them generally don't want to know either, so they can hide behind "not informed" etc.
The only UK company I can think of where the incoming replacement went public with "accounting irregularities" was Tesco.
Thus in many ways "Doing a Nelson" of turning a blind eye for as long as possible is the most profitable thing to do, and carries few of those pesky "sunk costs" that shareholders hate.
"[Romanosky] spoke to many executives and none of them could give a reliable metric for how to measure the PR cost of a public failure of IT security systems."
I bet an Andrew Jackson to a donut that those executives' thinking on the question doesn't include figuring out why non-customers consistently reject marketing incentives to jump aboard.
There are certain companies that do not get my business - and will never get my business - because of various considerations that matter to me: insecure/opaque security measures; opaque policies; outrageous TOS; child labor; unfair labor practices; price; other reasons. None of these companies - not a single one - knows why I can't be lured; they just doggedly sing the same marketing tune every n weeks/months. They just tell; they never ask. I have not gone pro-active, spelled it out for them, because the closest thing any of them has in place for accepting INPUT is tech. support. I conclude that they do not want to know.
Don't forget the costs incurred by coming up with a new happy-folksy jingle assuring the customer that "the only reason we even go to work each morning is the chance that you'll stop by and make our whole day".
"Crawfish-WeaselCorp. Making YOUR _____ experience mega-joyjoy is the ONLY reason we even exist!"
Then there's the expense of changing their name to "CWC International", flying a few congressbribees to the Bahamas for a blowjob, etc.
If you're expecting this to change, have another hit off that crack pipe.
@Moderator: "Will frameworks such as that proposed by NIST really induce firms to adopt better security controls? And if not, why?" Because you need to pay now and save later; that is the psychology behind it. Moreover, as you, Bruce, have stated multiple times (and I agree with you 100%), government should do it through mandatory insurance for companies. Then the insurance company (you pay the premium first) will impose a proper security framework through the premium level, as with insurance for other threats when you implement additional security measures. Cost-benefit analysis would be done by risk assessment professionals, not a one-size-fits-all approach.
Sadly, we see this same attitude in many aspects of America today: "It won't cost me much after I exclude the actual costs from my version of reality." Once upon a time this would have been called willful blindness; now it is called good business.
Customers are left with nothing when a breach happens. They don't have many options and more often than not have to deal with the results of a company lacking the skills/resources to deal with a privacy breach.
Maybe for now it's only a matter of culture, because not everything relies on tech and customers aren't fully dependent on it, but technology is becoming more and more present, and the metrics that are true today won't be in the future.
@all: Is going cash-free really ‘cleaner’ or ‘safer’? [www.bbc.com]
I doubt it, because only cash payment can provide real privacy of transaction (I am not talking about transactions of illegal items: drugs, firearms, explosives, poisons, you name it). Each transaction of legal goods/services with no cash payment involved (except gift cards initially purchased with cash) creates a 'pixel'/electronic trace in your future electronic profile/constellation, which could be used by business (to sell and share your data), by criminals (to analyze your income and habits/weaknesses), by lawyers/private investigators (for all kinds of unimaginable harm to your reputation), and by LEAs/Intel (domestic and foreign). Then you are at the mercy of those business analysts who developed the flags for analyzing bulk-collected data which set your profile aside (at best for further scrutiny). Dale Carson, former FBI agent, police officer and currently criminal defense lawyer in Florida, suggested for law-abiding citizens: "If they can't see, they can't take you." By the way, I'd appreciate it if store security cameras would capture an image for each pressure-cooker purchase made with cash and send it to the local JTTF (at least for now).
Inventory shrinkage is not synonymous with theft. That is only one possible cause of many. Here are just a few examples: a customer drops and breaks a bottle of wine before checkout, food expires or spoils and is discarded, a customer at a restaurant rejects the order, or a product such as gasoline evaporates or is spilled.
Shrinkage is often determined by an audit (“taking inventory”). If the inventory is known to be stolen and the amount is material it is reported as theft.
No and no; in fact you are more likely to lose tens if not hundreds of times as much to ID theft and "card cloning" as you will from cash in your pocket.
The reason these countries are going cashless is almost entirely down to the banks, who are "skimming" as much as 10% on card transactions in some places. It's high income and low cost for them, and it hits the poor harder than the middle class and up. High street banks, by contrast, carry real estate, labour and insurance costs, which cut into those fat-cat bonuses.
It's also noticeable that the two European countries mentioned have little or no competition in their banking sector, thus monopoly view wins.
I guess that's a couple of more countries on my "no visit" list. Sad because I've enjoyed both in the past.
However, his conclusion is correct. It's just not worth it for a company to increase cyber security, (unless the government, i.e., US taxpayers, pays for it).
It's about the money. It's always about the money. It's the only metric that means anything in most of the capitalist societies. It's like saying, 'Well, the WTC attack was only 0.000925278% of the US population, so that's not significant.'
We need a structured approach to cyber security, free of corporate meddling, and that's not gonna happen. ..........
@vas pup, The insurance idea sounds good, but would likely fail, if not in adoption, then surely in implementation. Corporate actors aren't going to allow mandatory-anything, especially from the Government. To determine responsibility, accurate attribution is necessary. Why should a company pay for cyber-insurance, if Microsoft (for example) provides insecure software? How about problems with CAs, protocols, and 'leakage' from the LE/IC community? Certainly more can be done -within- every business entity to plug security holes. And it can be done without significant investment. .......
Why should a company pay for cyber-insurance, if Microsoft (for example) provides insecure software?
That has been the main problem since I can remember first discussing it back in the early to mid 1990s. The subject came up with @Bruce's old employer when I was designing phones. They were looking not just for traceable QA in the hardware design --which was fair enough-- but also in the software, which was hand-crafted in ASM for an 8-bit Motorola microcontroller.
Importantly there were dark mutterings about product liability thus insurance...
Back then the software was "the magic sauce" and not open to other team members, let alone the customers or their software auditors, who were still having trouble getting away from the waterfall. About the only formal method available at the time was Z, which, whilst it worked, was no joy to even think about let alone use. Needless to say we muddled through and the "customer" was happy.
I got stuck with the task of getting additional product certification, as I'd previously got a couple of products through Underwriters Laboratories that were used in the hotel and entertainment industries. After investigation it turned out that not even UL had a scheme for phones, as nobody had ever asked before. On approaching a friend who was an underwriter at Lloyd's of London, it was suggested my head needed inspecting for defects "as nobody was that mad": the only software insurance at the time was on airframes and industrial systems, and it was mainly bespoke and carried a large premium.
Bear in mind this was still the Win3-on-MS-DOS days, and Micro$haft's reputation was far from shining, to put it politely. The actuaries basically gave most software the thumbs down and would only look at "state machine" style implementations where every state was identified along with the transitions etc., with formal logic given. In other words "no interrupts allowed", etc., even though in a multitasking environment interrupts are fairly essential...
My experience in designing safety-critical systems for the petrochem industries had indirectly taught me a lot about the blame game, as I was on the periphery of the Piper Alpha disaster and saw the finger pointing and the wallpapering of asses of those involved first hand.
The simple fact is it would still be difficult to get product liability insurance today for software running on a commodity OS. Not because we cannot get the required software behaviour, but due to malware and the like. We only see it on the likes of eye-wateringly priced medical equipment for use on isolated networks and similar.
Speaking of OSs, Qubes released 3.2 today.
Z was all the rage in the early to mid 90s, as in '92 the IBM CICS project won a UK Queen's Award for Technological Achievement, and had thus been blessed by the powers that be.
Needless to say I took the "engineering approach" of 180-degreeing things so that we looked more like swans than ugly ducklings. Either way, I must have done something right, as the resulting product got the first "Which Number 1".
@CallMeLateForSupper (”They just tell; they never ask. ... I conclude that they do not want to know.”)
Um, I’d suggest to Call (pun intended :-) it “they do not need to know”. It is not important to convince the suspicious (== hard work), because there are much more unsuspicious (easy to catch) available. There are still billions of “undeveloped” (sorry, sounds bad / derogatory but is meant literally) people available, plus a “forever” growing (repeat: growing) stream of newly born probable customers.
Uncontrolled capitalism (==nature) will always go for the 80% and forget the 20%. Sorry, better try to accept that you (20%) don’t matter.
Re: “none of them could give a reliable metric for how to measure the PR cost ”
Yes, this is the problem: these poor guys don't know; they need help. Give them a metric, say $100 per breached dataset ($50 fine, $50 to the victim). Problem solved. Capitalism needs control.
The insurance idea is OK (= capitalism), but only when there is a defined risk (see above). Companies can't prevent cyber aggression (e.g. DDoS); the outage (risk) may cost them enough to invest in (fraudulent) protection, but they can't be punished for it, as they are not causing it. Transaction fraud is a typical risk; they may pay either from their own pocket or via insurance, and I have no problem as long as the customer is covered.
Check out the Ponemon Institute's 2016 Cost of a Data Breach Study. They break down the costs by industry. A data breach in the financial services or health care industry has a significantly higher cost, both direct and reputational, than a breach at a public institution.
According to the summary of Ponemon 2015, the average cost per breached customer record was approximately $120-$150, and the average cost to the participating companies was $3.79M.
$3.79M >> $0.2M, so there seems to be quite a range on the cost involved.
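Those two Ponemon figures also let you back out the implied average breach size. A rough sketch (the dollar figures are as quoted above; the arithmetic is mine):

```python
# Back out the implied average number of breached records from the Ponemon
# figures quoted above: $120-$150 per record, $3.79M average total cost.
avg_total_cost = 3_790_000
per_record_low, per_record_high = 120, 150

records_high = avg_total_cost / per_record_low   # cheaper records imply more of them
records_low = avg_total_cost / per_record_high

print(f"implied breach size: ~{records_low:,.0f} to ~{records_high:,.0f} records")
```

That puts the implied average breach at a few tens of thousands of records, which also suggests Ponemon's sample skews toward larger incidents than Romanosky's.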
As to some of the other comments above, the externalities could be changed if the law were changed to require a fine be paid to the govt. (I'd prefer payment to the affected individuals, say $150 each, but just requiring a payment under law would change the cost considerations involved.)
Some years ago, one of the local telcos here in Calif accidentally/stupidly released subscriber info for unlisted phones for (basically) all subscribers. The statutory fine was $10,000 per breached subscriber record. The fine would have been so huge the telco would have gone into receivership, and required a special mea culpa and waiver from the Calif Public Utilities Commission. (There are also separate fines for the disclosure of law enforcement personnel private info.)
I think this approach could be reasonable, if perhaps one categorized the number of pilfered data fields per user record as the basis for the fine along with the number of user records, along with escalating fines for failure to disclose within certain timeframes.
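The fine structure proposed above (per record, scaled by fields exposed, escalating with disclosure delay) can be sketched as follows. All dollar amounts and multipliers here are invented for illustration; only the general shape comes from the comment:

```python
# Sketch of a per-record breach fine scaled by the number of exposed data
# fields, with an escalating multiplier for each missed disclosure deadline.
# The base rate and multiplier are hypothetical values, not a proposal.
BASE_FINE_PER_FIELD = 25        # hypothetical: $25 per exposed field per record
LATE_DISCLOSURE_MULTIPLIER = 2  # hypothetical: fine doubles per missed deadline

def breach_fine(records, fields_per_record, deadlines_missed=0):
    """Total fine: records x fields x base rate, escalated for late disclosure."""
    base = records * fields_per_record * BASE_FINE_PER_FIELD
    return base * (LATE_DISCLOSURE_MULTIPLIER ** deadlines_missed)

# 100,000 records with 3 fields each, disclosed on time:
print(breach_fine(100_000, 3))      # 7,500,000
# The same breach with two disclosure deadlines missed:
print(breach_fine(100_000, 3, 2))   # 30,000,000
```

Even at modest per-field rates, the fine quickly dwarfs the ~$200,000 typical incident cost from the paper, which is exactly the incentive shift being argued for.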
At a minimum, it would surely boost the insurance costs. This would eventually ripple back into more effective protections.
You worked with Z? Hehe. Nice spec language, nice consistent logic system, brilliant people behind it. But a PITA in terms of notation. As B clearly indicates, Prof. Abrial got some clear hints himself re. notation *g
Small addon to what you so nicely described: let's not even get started on the cert. bodies/agencies. I had some experience with TÜV and I sometimes didn't know whether I should cry or punch their faces. For a start those people are, to put it diplomatically, often not exactly strong on the engineering side but excessively stubborn (and even enchanted, it seems) on the bureaucracy side. There seems to be a religious tendency to believe that security is achieved by, and is proportional to, the amount of paper filled with a strange mixture of legalese and what formally looks like engineerese.
Well, you survived it without major psychological damage. That's about as much as one can reasonably hope for. *g
I find myself wondering if the people ranting about "capitalism" and "corporations" have given a thought to cybersecurity in government, where security controls and consequences often are laughable, not to mention that governments often are the perpetrators of breaches.
Make that "capitalism", "corporations" and "government" can all have laughable security and can all be perpetrators of breaches. Government uses corporations to provide many of its online functions and security. Even some dubious small businessmen are not above directing crafted malware at the competition, deleting customer records, and other petty behavior.
Bruce, it's really hard for me to draw broad conclusions from that paper. One graph that I regret they don't have (and I understand why they don't have it) is how losses compare within an industry based on per-firm expenditures on cybersecurity. That is, did the firm that spent 0.001% of its budget get broken into more than the one that spent 0.4% (and how would they know? ;-)
It would not be meaningful except in demonstrating attackers' choices.
Basically there are so many vulnerable systems at every level of security that it's "a target-rich environment". We talk of the "low hanging fruit" getting hit first, but the reality is rather different.
Aside from certain security techniques such as air/energy gapping, a zero-day vulnerability is going to be a "leveler". That is, all connected systems of the same basic type are vulnerable no matter how much effort, at whatever cost, the admins and accountants have put in...
Thus the systems that get attacked are the subset of all those connected that the attacker decides, for other reasons, are worth spending time on...
Even when it's not a zero-day attack, systems often get neither patched nor mitigated for long after the attack vector is known and patches made available. Often this is for "business reasons" other than the effort/cost of security, one such being that any change will bring the production system down. Therefore it still remains "a target-rich environment", and thus the attacker's choice, not effort/cost...
Therefore the graph would show which systems the attackers thought were "ripe plums", not "green tomatoes".
Hint: The difference between corp and gov is gov's impunity per se; hence there is no capitalism in gov, and this is why it (gov!) doesn't work.
Imagine OPM lost your data, pays you the $50 remedy and the $50 fine to the gov, and takes that $100 from its budget. 50 + 50 + 100 = 200, x 2 (budget for next year) = $400; this is the minimum it would cost you personally that they lost your data, let alone the administration costs you'd have to pay anyway. Got the idea how gov works? Don't touch!
"As for reputational damage, Romanosky found that it was almost impossible to quantify. He spoke to many executives and none of them could give a reliable metric for how to measure the PR cost of a public failure of IT security systems."
To some extent, the author presents as shocking results that are obvious. As with any distribution with a large right tail, the mean is going to be much larger than the median. If you select a data set that contains many small companies, the losses are going to be much smaller than the results from other surveys, which probably over-sampled large companies.
I notice that the report completely ignores costs to the customers of the companies. That is so typical of company accountability: "Well, yes, our product may kill thousands, but no one will be able to hold us responsible, so our costs are minimal."
There must be costs to the customers, too. Ashley Madison probably cost customers hundreds of millions in damaged reputation, but so long as the company escapes costs, who cares? Not the companies, and clearly not Sasha Romanosky at RAND.
The world would be a much better place if the companies were held accountable for the secondary costs of their products; instead of only being accountable for primary profits.
I subscribe fully to what Roger said: this only means we (before the hackers do) need to make all these breaches more expensive. Like a hack tax. Whenever you get hacked and it's your fault, you have to pay an automatic fine. Whenever you get hacked and don't report it, you pay a huge fine. Not perfect, but still better than the present.