Should CEOs Lose Pay For Cybersecurity Failures?

Given all the cybersecurity failures we've witnessed thus far, could it be any clearer that our legal and governance incentives and mechanisms for preventing and dealing with cyber attacks are not properly aligned? Here's the latest data point: the CEO of TalkTalk was paid almost £2 million on top of her base pay of £550,000 for 2015, a year that included TalkTalk's latest cyber attack and the resulting loss of 95,000 subscribers.


I came across this news over at the CFO Network group on LinkedIn, where Conor Marken recently posted a link to an article entitled Fine Firms For Cyber Security Failures. The article reports that members of parliament in the UK recently considered whether companies should be fined if they fail to guard against cyber attacks, prompted by their discussion of last year's TalkTalk hack. Here's the best line:

The committee also recommended that CEOs' pay should be linked to effective cyber security;

Great sentiment, but who knows if that would really work? Linking CEO pay to other performance factors hasn't turned out as well as we hoped. Harvard Business Review was sour on the whole idea as early as 1999. And here's their latest take on it: Stop Paying Executives for Performance.

I'm not sure what the big fix is for the fact that many of the same qualities of the Internet that let Amazon dominate are also fueling the rise of online criminals (bullies): low cost, global reach, mostly automated, and largely anonymous. However, it is clear that the legal and governance incentives and mechanisms are not properly aligned.

So, what should we do?

4 Reasons Why Cybersecurity Depends On Relationships

Ever wonder why cybersecurity is so hard for people to get right? And why are cybersecurity leaders failing to convince people to work more securely? We can learn some great lessons by studying the spread of medical and other technologies and then apply those lessons to cybersecurity technologies we know make a difference, such as password managers.

For example, anesthesia (specifically, chloroform) was in worldwide use less than a year from its introduction in 1846. In contrast, antiseptics, which were promoted in the 1860s, took over twenty years to become established in most operating rooms. Why the difference?


Dr. Atul Gawande: "We yearn for frictionless, technological solutions. But people talking to people is still the way that norms and standards change."

Here's why: the spread of all new ideas about what's good and how things should be is dependent on people talking to each other. Everett Rogers, who is best known for introducing the term early adopter, tells us that "Every change requires effort, and the decision to make that effort is a social process." In other words, new ideas are spread and adopted primarily through relationships.

I've learned this lesson the hard way. Only after wasting $30,000 of my budget and a good chunk of political capital trying to implement a new, homegrown cybersecurity tool did I realize that my lack of the right relationships had doomed me almost from the start. Based on what I learned from that failure, I now take a drastically different, relationship-driven approach to introducing change, and I recommend you do the same; your change efforts will be more successful for it.

Back to anesthesia versus antiseptics. The New Yorker published an article by Atul Gawande: Slow Ideas. You may remember one of his well-received books, The Checklist Manifesto. (Save yourself some time and money: read the article upon which the book was based.)

Slow Ideas describes and promotes Atul's BetterBirth project. It's an experimental approach to reducing the rate of death among mothers and babies during and shortly after childbirth in poorer countries. And, along the way, Atul also answers the question about anesthesia versus antiseptics.

It's a fascinating story that's well worth reading on its own merits. But it also provides keen insight into the struggle to create new norms, which any cybersecurity leader looking to promote change should appreciate.

From reading Dr. Gawande's article, I've identified four reasons why you should lead all your change efforts by first using your relationships:

  1. Technology alone won't get the job done. Dr. Gawande describes seeing unused incubators pushed into dark corners, broken due to lack of spare parts or switched off due to a lack of electricity. As technologically advanced as the units were, dropping them off in underdeveloped countries and then making no arrangements for integrating them into local life speaks to the lack of relationships.
  2. Requests, incentives, and penalties only work up to a point. Merely requesting a change will win over a certain percentage of the audience, but probably not as many as you wanted. Studying the tax code of any country will reveal that incentives are hard to get right. People have a way of maximizing incentives for themselves, often to the detriment of the stated goals, and in ways the authors never imagined.
  3. Research has shown relationships are the most effective way to bring about change. We can introduce a new idea to people. But people follow the lead of other people they know and trust when they decide whether to take it up. Everett Rogers wrote: "Every change requires effort, and the decision to make that effort is a social process."
  4. Real-world experiences. In his article, Dr. Gawande tells a story about how drug makers persuade stubborn doctors to prescribe new medicines: "Evidence is not remotely enough, however strong a case you may have. You must also apply 'the rule of seven touches.' Personally 'touch' the doctors seven times, and they will come to know you; if they know you, they might trust you; and, if they trust you, they will change. Human interaction is the key force in overcoming resistance and speeding change."

I encourage you to read the article for yourself. It's persuasive and very inspirational. And you'll find out why anesthesia got into the operating room faster than antiseptics.

Have I convinced you that relationships are the best method for improving cybersecurity? If not, why not? Do you know a better way?

Two Daily Actions To Contain Data Breach Costs

A single data breach can cost your company a lot of money. How much? Based on the NetDiligence 2015 Cyber Claims Study of actual insurance claims data, we know the average cost of a large-company data breach is US$4.8 million.

Want to minimize the cost? Quickly identify the data breach.

How do I know that's the best way? And how do you do it quickly?

Here's the first answer: check out the data in the IBM/Ponemon 2015 Cost of Data Breach Study. This graph from page 22 of their report shows the relationship between the mean time to identify a data breach and total average cost:

[Graph: mean time to identify a breach versus total average cost, IBM/Ponemon report, page 22]

That's a very clear connection, don't you think?

OK, so how can you quickly detect a data breach without spending a ton of CapEx on a fancy intrusion detection system and then a ton of OpEx to run the thing?

Here's how: have your server administration teams run these two daily checks:

  1. Discover whenever someone becomes a privileged user by verifying all new accounts that have been added to any administrator or root groups.
  2. Identify data being staged for exfiltration by noticing when large amounts of data suddenly show up in unusual places.

With both of these checks, the large majority of the work can be automated: use your existing server management tools to compare today's snapshot against yesterday's and highlight the major differences in (1) the membership of all your admin/root groups and (2) the percentage of free server disk space.

The manual work is tracking down why those changes happened and making sure there's a legitimate business reason behind each one. This will take some sleuthing at first to learn whom to call and what constitutes a normal change, but within a month you will settle into a productive routine.
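As a minimal sketch of the automated part, the daily diff of snapshots might look like the following (the snapshot format, the group list, and the alert threshold are all assumptions; how you actually collect the snapshots depends on your server management tooling):

```python
import json

# Hypothetical threshold: flag any volume whose free space drops this much in a day.
ALERT_DISK_DROP_PCT = 10.0

def load_snapshot(path):
    """Load a daily snapshot saved as JSON, e.g.
    {"admin_members": ["alice"], "free_disk_pct": {"/": 62.5}}"""
    with open(path) as f:
        return json.load(f)

def diff_snapshots(yesterday, today):
    """Return (accounts newly added to admin groups, volumes with a large free-space drop)."""
    new_admins = sorted(set(today["admin_members"]) - set(yesterday["admin_members"]))
    disk_alerts = []
    for volume, free_now in today["free_disk_pct"].items():
        free_before = yesterday["free_disk_pct"].get(volume, free_now)
        if free_before - free_now >= ALERT_DISK_DROP_PCT:
            disk_alerts.append((volume, free_before, free_now))
    return new_admins, disk_alerts

# Hypothetical demo with inline snapshots:
yesterday = {"admin_members": ["alice"], "free_disk_pct": {"/": 60.0, "/data": 80.0}}
today = {"admin_members": ["alice", "mallory"], "free_disk_pct": {"/": 58.0, "/data": 40.0}}
new_admins, disk_alerts = diff_snapshots(yesterday, today)
print("new privileged accounts:", new_admins)   # ['mallory']
print("large free-space drops:", disk_alerts)   # [('/data', 80.0, 40.0)]
```

Everything the script flags goes to a human for the manual follow-up described above.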

What other simple techniques have you used to detect data breaches?

How Much Should You Pay For Cyber Insurance?

The cyber insurance market is booming. Seems like everyone wants to get a policy to transfer risk. And why not? Insurance is a useful risk management tool in so many other situations: general liability, property damage, errors and omissions, etc. The question on everyone's mind is: how much for a cyber policy?


How big is the market getting? According to David Bradford, co-founder and chief strategy officer at Advisen, an advisor to the insurance industry:

The market for cyber insurance in 2015 was $2.5 billion. For 2020 it's estimated at anywhere between $5 billion and $10 billion. By comparison, workers' compensation insurance is a $55 billion market.

Bradford says this is roughly what you can expect to pay for a year of coverage:

  • For companies with less than $500 million in revenue, policies with limits of between $1 million and $5 million cost between $2,000 and $5,000.
  • For companies with more than $500 million in revenue, policies with limits of $5 million to $20 million carry premiums ranging from $100,000 to $500,000.
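One way to compare those two tiers is to express the premium as a fraction of the policy limit (what insurers call rate on line). A quick back-of-the-envelope calculation from Bradford's quoted ranges:

```python
def rate_on_line(premium, limit):
    """Annual premium expressed as a percentage of the policy limit."""
    return 100.0 * premium / limit

# Smaller-company tier: $2,000-$5,000 premium on $1M-$5M limits.
small = [rate_on_line(p, l) for p in (2_000, 5_000) for l in (1_000_000, 5_000_000)]
# Larger-company tier: $100,000-$500,000 premium on $5M-$20M limits.
large = [rate_on_line(p, l) for p in (100_000, 500_000) for l in (5_000_000, 20_000_000)]

print(f"smaller companies: {min(small):.2f}%-{max(small):.2f}% of the limit")
print(f"larger companies:  {min(large):.2f}%-{max(large):.2f}% of the limit")
```

The spread alone (roughly 0.04% to 10% of the limit, depending on where in each quoted range you land) hints at how unsettled cyber underwriting still is.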

There's a big caveat, though: even though about 60 companies are writing cyber insurance policies today, in my experience many are making it up as they go along. Terms, conditions, coverages, exclusions, and risk assessments are all over the place. Unlike a commercial fire policy, there's almost no standardization.

Insurance companies aren't even in agreement about what factors indicate a decreased risk of a policyholder filing a claim. And that can translate into higher (or lower) premiums than the risks require. At this point, it's reasonable to wonder if your claim will be paid at all. The litigation over cyber coverages is just getting started.

If you want to go forward with buying a policy, get yourself a reliable broker and get ready to do some serious comparative shopping. Buyer beware!

77 Percent of Businesses Have No Cyberattack Response Capability

Did you know that leaning into your cyber risks can be a source of competitive advantage? Here's a stunning data point that makes my case.

The NTT Group (Japan's AT&T) recently released their 4th annual Global Threat Intelligence Report (GTIR). Similar to the recently released Verizon Data Breach Incident Report, the NTT report…

…analyzes attacks, threats and trends from the previous year, pulling information from 24 security operations centers, seven R&D centers, 3.5 trillion logs, 6.2 billion attacks and nearly 8,000 security clients across six continents.

Here's one of their most striking findings for 2015:

Trend data over the last 3 years illustrates on average only 23 percent of organizations are capable of responding effectively to a cyber incident. 77 percent have no capability to respond to critical incidents and often purchase incident response support services after an incident has occurred.

You can find this supporting chart on page 47:

[Chart: incident response capability trend, GTIR page 47]

My initial reaction is that executives are planning for cyber attacks the way they plan for 100-year floods: we'll deal with it, if it ever happens.

Given the frequency and severity of the attacks documented in the rest of the report, and all over the news media, that mindset isn't remotely aligned with the reality of today's cyber risks!

But back to the opportunity for competitive advantage: what if your fiercest competitor was a member of the 77% and was cyber-attacked? They could expect to bleed cash and be distracted for months. Now what if you were one of the 23% able to respond effectively to a major cybersecurity incident? How would that boost digital trust with your customers and partners? How much reputation would you save by having your experts get out in front of the story? And how much more quickly could you get back to working on what's most important to your business?

By the way, if you want a glimpse of data breach response done very well, check out this critique of Anthem BlueCross BlueShield's 2015 data breach. If you want to see a poorly done example, here's a critique of TalkTalk's slow, awkward response.

Which one would you rather be?

Why You Should Pay Ransom For Your Data

A few weeks ago I talked about why paying ransom to get your data or computers back online was a bad idea: like any bully, once they succeed in getting your money it will embolden them to demand more, and from more people.

But it turns out that at least one venerable American institution thinks you should pay: the Federal Bureau of Investigation.


Yep, the FBI says you should pay up. They are, in fact, on record (October 22, 2015) telling people to pay the ransom:

Joseph Bonavolonta, the Assistant Special Agent who oversees the FBI's CYBER and Counterintelligence Program in Boston, spoke at the 2015 Cyber Security Summit and advised that companies infected with ransomware may want to give in to the criminal's demands.

After my post went online, I heard from a colleague who told me:

I was presenting at an InfraGard briefing at the FBI office, and they basically told everyone there was nothing they could do if it happened, that they were pretty much on their own. There is also no telling what the ransomware left behind for another go-round, or continued surveillance while it held the system captive. Merely breathing a sigh of relief and thinking you are in the clear is a really bad idea.

Although it's still the right thing to do, I know that not paying the ransom is difficult, even if you have good backups. It's not as fast as just paying, because restoring takes a lot of time and you'll still lose some data. And, whether you pay or not, there's a good chance you will get hit again with a new strain of ransomware, so why fight it?

I wonder what the dominant type of backlash will be as more US citizens wake up to the fact that law enforcement can't help them prevent or recover from these new cyber crimes. Anger? Fear? Vigilantism?

What do you think is most likely?

What To Do About Reputable Websites Delivering Malware?

Did you know that reputable websites (like Forbes, The New York Times, and others) have been caught trying to install malware on their visitors' computers and smartphones? This isn't new, but it's a trend that's been getting worse when it should be getting better.

[Image: New York Times tweet]

These reputable websites are not deliberately trying to hijack your computers, of course. It's the networks that serve up the ads that have been compromised. Known as malvertising (malicious advertising), it is, according to cybersecurity expert Lenny Zeltser:

…attractive to attackers because they can be easily spread across a large number of legitimate websites without directly compromising those websites.

This type of attack relies on Adobe Flash and Microsoft Silverlight being configured in your browser to auto-play the ads. It has been going on since at least 2007, but it got much worse in 2015 and continues to grow. And it appears to be crossing over to mobile devices.

The recent article in The Register didn't say it, but I will: why shouldn't organizations of all sizes install an ad blocker (I suggest uBlock Origin) across all desktops and mobile devices? At least until this ad-network mess gets cleaned up.

Is there some other, easier thing we should be doing?

Boeing Supplier Lost $54 Million to CEO Fraud

Did you know that Business Email Compromise (BEC), also known as CEO Fraud, is still a threat? And it's not just the stolen money that causes executive headaches. It can damage your stock price and your reputation with major customers. And, in the case of FACC, it cost the CFO, Minfen Gu, her job.


Here's what Computer Weekly said about the fraud, announced on January 19th:

A $54m cyber fraud against Austria's FACC has sent the aircraft supplier's share price reeling. The company's share price fell nearly 17% in response to news of the company's loss, which is one of the greatest losses to date caused by cyber fraud, according to Bloomberg. The loss reported by the supplier to companies such as Boeing and Airbus is way above the average cost of the worst breaches in the UK of between $1.9m and $4.4m, reported by PricewaterhouseCoopers (PWC) in 2015.

So, how do you prevent these attacks from succeeding?

In my experience, most companies are overspending on technology to prevent data and money theft while downplaying the people, process, and management aspects. As with FACC, the recent theft of W-2 information from Moneytree succeeded mostly because of weak internal processes and poorly trained people. And there's a lot you can do in these areas for little or no added expense.

Training people to detect and resist attempts to trick them into sending money (or sensitive data) to criminals is a top action everyone should be taking right now. A good approach is to combine a strong internal communications campaign with a software-as-a-service anti-phishing testing service, such as PhishMe or one of its competitors. Expect to pay about $20 per user, per year.
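To put that price in perspective, here's a rough back-of-the-envelope comparison (the 1,000-person headcount is a made-up example; substitute your own):

```python
# Hypothetical organization size; plug in your own headcount.
employees = 1_000
cost_per_user_per_year = 20          # rough SaaS anti-phishing price quoted above
annual_training_cost = employees * cost_per_user_per_year

facc_loss = 54_000_000               # FACC's reported BEC loss

print(f"annual anti-phishing program: ${annual_training_cost:,}")
print(f"FACC's single BEC loss would fund {facc_loss // annual_training_cost:,} such years")
```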

On that note, organizations need to make sure their management team fully supports their cybersecurity program, especially first-line supervisors. Why? When people hear about their responsibility to prevent cyber crime, their first question will be "Is this for real?" and then they will wonder "How will this affect me?" Their supervisor will either encourage people to join the program or kill it, depending on how they answer.

Finally, people have to feel safe to respectfully challenge any suspicious request. Otherwise, they will be stuck between the fear of being fired for not immediately complying with the request and the fear of making a big mistake.

What else would you do to protect your organization from CEO Fraud?

HIPAA Settlement Costs At Least $163 Per Record

Here's an announcement that should have any HIPAA-covered organization sitting straight up! Especially business associates, because this is going to affect their agreements with HIPAA covered entities.

From the Office for Civil Rights (OCR): $1.55 million settlement underscores the importance of executing HIPAA business associate agreements.


Here's their abstract of the settlement:

North Memorial Health Care has agreed to settle charges that it potentially violated the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy and Security Rules by failing to implement a business associate agreement with a major contractor and failing to institute an organization-wide risk analysis to address risks and vulnerabilities to its patient information. North Memorial is a comprehensive, not-for-profit health care system in Minnesota that serves the Twin Cities and surrounding communities. The settlement includes a monetary payment of $1,550,000 and a robust corrective action plan.

It all started in 2011 with a laptop stolen from an employee of North Memorial's business associate, Accretive Health. The laptop was in the employee's locked car with approximately 9,500 unencrypted ePHI records on it.

North Memorial is required to complete the following corrective actions:

  • Develop Policies and Procedures Related to Business Associate Relationships (90 days from settlement)
  • Modify Existing Risk Analysis Process (180 days from settlement)
  • Develop and Implement a Risk Management Plan
  • Training (60 days from HHS approval of North Memorial's new policies)
  • Promptly File Reportable Events and Annual Reports

Considering only the fine, North Memorial settled with OCR at just over $163 per record. It's a chilling way for executives to learn a lesson about where cybersecurity should fit in their priorities.

Here's another angle on this story: the various Ponemon "cost of a data breach" studies set the amount at about $145 per record. The fine alone exceeds that benchmark. Once all the extra costs are tallied, I wonder what the final cost per record will be?
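The per-record figure is simple division over the numbers above:

```python
settlement = 1_550_000   # OCR monetary payment
records = 9_500          # unencrypted ePHI records on the stolen laptop
ponemon_benchmark = 145  # approximate Ponemon per-record cost

per_record = settlement / records
print(f"fine per record: ${per_record:.2f}")                          # $163.16
print(f"over the Ponemon benchmark by: ${per_record - ponemon_benchmark:.2f}")
```

And that's before legal fees, the corrective action plan, and all the other response costs are counted.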

Can You Steal $1 Billion Using Malware?

Based on recent reports out of Bangladesh, it looks like malware can steal at least $80 million. Apparently, a mere typo by the thieves prevented the loss of much more. Some people find it hard to believe that such large sums can be stolen without any overt insider assistance.


Source: Kaspersky Lab

After reading this story, a friend said to me, "This is crazy. What percentage would you say start off as 'inside' jobs? To me a majority start from within."

A 2013 report by Clearswift said:

…more than half of all security incidents (58%) can be attributed to the wider insider family: employees (33%), ex-employees (7%) and customers, partners or suppliers (18%).

So, my friend is right.

But to suggest that malware alone couldn't help a gang steal $1 billion is old thinking. Stuxnet and Carbanak are two high-profile examples of doing great damage from a distance. And both of them started by using social engineering to pierce the human firewall.

Some people say the human firewall is irreparably broken. While I wouldn't rely on it exclusively, there's no need to give up on your people completely. A good blend of countermeasures across the people, process, technology, and management dimensions is the best approach. And using the NIST Cybersecurity Framework (CSF) to organize yourself makes great sense.

Not sure where to begin? Drop me a note and I'll be glad to point you in the right direction.