Semrau Security

Fire All The Infosec Workers!

May 21, 2021 · Brian Semrau · Season 1, Episode 1

Discussion on Professor Allen Gwinn's article in The Hill "Our cybersecurity 'industry best practices' keep allowing breaches".

Link to original article: https://thehill.com/opinion/technology/553891-our-cybersecurity-industry-best-practices-keep-allowing-breaches (permalink: https://cite.law/M57E-TSVT)

Article summary ends and analysis starts at 5:11.

Infosec Chicago
Infosec Chicago helps organizations stay secure, no matter how scary the internet is.


WELCOME to Semrau Security, I’m your host, Brian Semrau.  For those of you who don’t already know me and are new to the show, I’m a full-time digital forensics investigator whose practice focuses on a variety of privacy and security research, and I also run a small information security consulting company called Infosec Chicago.

 

If you haven’t guessed yet from the lack of previous episodes, this is the first episode of the show.  I’ve been toying with the idea of starting a podcast for a while, and the other day I came across the perfect topic to get the ball rolling, so I bit the bullet and got started.  Of course, please bear with me while I learn the ins and outs of running a podcast – hopefully there won’t be too many issues, but normally I’m behind the scenes rather than in front of the microphone.

 

With today’s episode, we are going to be talking about an article posted in The Hill by Allen Gwinn (I hope I’m saying that correctly).  I’ll be posting a link to the article and a permalink archive in the episode description, so feel free to check that out – I think it’s a good read.  If you have already read and familiarized yourself with the article, I’ll put the timestamp down below where you can skip ahead to my analysis of the arguments.  Mr. Gwinn (or rather, Professor Gwinn) is a Professor of Practice in Information Technology at the Cox School of Business at SMU Dallas, and he claims over four decades of experience with systems, networks, data, and other cyber resources.  There is a lot to break down in that, and while I haven’t met or talked with Professor Gwinn, based on that introduction and the way the article was written, my suspicion is that his focus is more on DevOps and/or organizational IT leadership than on information security.  Now, I’m not saying that as a negative; in fact, the industry is moving toward DevSecOps rather than just the DevOps that radically shook up traditional monolithic IT departments shortly after agile development became the latest buzz-phrase.  I’m simply bringing this up so that we can properly frame what he is talking about in this article, which reads like someone with a bit of an outsider’s perspective looking in on the average infosec person working in a traditional IT environment.

 

Before I jump into my views on the article, a quick summary of its contents: there are a few main topics that Professor Gwinn brings out.  He talks about how the recent Colonial Pipeline attack is the latest in a constantly growing list of breaches in which a threat-actor managed to get through the infosec “industry best practices,” and how reports come in daily about new incidents hitting major organizations, with the confidentiality of their data breached all over the dark web.  And of course, guarding the resources that are getting compromised is ostensibly an army of information security professionals sporting credentials like the CISSP and CISA, and even degrees ranging from undergraduate to doctoral.  That one hit me hard, since I’m just about to graduate with a master’s degree in cyber forensics and security.  But one of the commonalities we see with these companies is that they always tout the “industry best practices.”  (Which I think is true; it’s something I’ve noticed too.)

 

Professor Gwinn goes on to point out that these very impressive people are skilled in the art of tedious IT tasks: audits, policy changes, defending those changes in 100+ page memos, and ostensibly code reviews and making developers’ lives miserable.  Yet with all of this, what happens when there is a security breach?  Of course, everyone except them is to blame.  Maybe the user did something that was just in their nature, or the vendor can be blamed, and so on.  Professor Gwinn’s thesis is that industry best practices are not best practices; rather, they can be dangerous practices – such as following least privilege and only allowing network administrators access to the network, the server team access to the servers, developers access to the development environment, and so on.  His argument is that these practices limit the ability of the employees who understand the systems to identify anomalies, which are key indicators of threats in progress.

 

Professor Gwinn sums this all up by stating: “The ‘good guys’ are administratively prevented from having a holistic view of systems, networks, applications, workstations and other resources — when this holistic view is exactly what is needed to prevent cyber attacks.

It seems the only person with a truly holistic view of a corporate network and data resources is the hacker. Unfortunately, hackers tend to not comply with corporate information security policy.”

 

He goes on to recommend that companies release information security employees when they fail, with a “one strike and you are out” hiring policy, and that they never hire an information security employee who has ever worked for a firm that has had a security incident.  Additionally, he essentially recommends granting all IT personnel (including developers) access to virtually everything within the infrastructure so they can investigate anomalies.

 

Overall, this was a very interesting article to read.  There is a lot that I agree with, and a lot that I disagree with.  I had typed up a bit of it in a social media response, but I figured I would go into more depth here on the podcast.  First and foremost, I want to address the assertion of “industry best practices.”  I find this to be a buzz-phrase more than anything else.  It’s ambiguous and doesn’t really say anything substantive, which I think is one of the reasons lawyers defending these companies and people writing press releases like to use it.  I think if we were to dig into how Colonial Pipeline got hit with this ransomware, we would find that they really were not following what the vast majority of information security professionals would consider industry best practices.  Now, of course, I could be completely wrong about that, and maybe they were following what many in the industry would consider best practices; but my money is on them having been really careless in at least one area of their security posture.  If that is the case, then the entire premise of using this attack to make the point falls flat on its face.  And frankly, that’s all this was – just one example that Professor Gwinn was using to make his point… though the argument could be made that even if Colonial Pipeline really was following best practices, the vast majority of the other breaches he referenced were not following what would generally be considered best practices.

With that being said, I do actually agree with the main premise.  I have generally said it a bit differently, but my view is that if what the industry is doing right now were working, we wouldn’t have as many problems as we do today.  In various companies I’ve consulted for, I constantly have senior leadership telling me that they did X, Y, or Z at their previous company and it worked great.  Frankly, I almost always call BS on that (just a bit more politely, of course).  Unless they can give me specifics on what exactly worked better and compared to what (using actual data, not just general feelings), my view is that it is highly suspect anecdotal evidence.  The problem with any approach to security is that it works great until it doesn’t, and unless you have data, such as from pen-testing or threat hunting, that can back up the claims about what works, you rarely have anything that can say whether you were just lucky or it was actually working.  The kicker is that many of these companies don’t have the logging and visibility they need to even identify a threat-actor in their systems, so for all anyone quoting that anecdotal data knows, they may have had major security failures and never even known it.

Instead of taking a traditional approach to security, I always like to challenge the “why” of something.  Why should we have passwords expiring every 30 days?  (Spoiler alert: we shouldn’t.)  Why is removing admin rights from endpoints (at least in the traditional sense) the best way to limit users to least privilege?  Why do we need a network edge if the vast majority of our employees are working from home – are we sending enterprise-grade firewalls to each and every employee’s house, and/or making them ask the local Starbucks to plug one in when they go for coffee and bring their laptop?  Or are we just clinging to outdated security models that need to be updated for situations where we have to assume and plan for the worst?  So in that sense, yes, if we are talking about “industry best practices” as they are typically applied within traditional work environments, I would have to agree that there are probably better ways to handle things without getting so bogged down in tradition.

 

In regard to the argument about people with impressive resumes getting bogged down in tedious tasks such as paperwork and policies, I think it’s important to recognize that it often isn’t by choice.  It’s a part of the job that most of us would probably rather not be doing.  I always claim that working in security or forensics is 75% paperwork, 10% really repetitive stuff that should be automated, 5% building automations, 5% making sure those automations work, and 5% really, really cool stuff that makes everything else worth it.  I would propose that the vast majority of people in those roles hate the paperwork and tedium, but they put up with it because they have to and because they have a passion for the work (it’s rare to see someone working in infosec beyond entry-level roles who doesn’t have a passion for it, with the possible exception of certain executive leadership positions, such as CISO, which have more of a business focus than anything else).  But to be fair, it is often easy to get lost in that work and lose sight of the bigger picture – especially when compliance requirements are involved.

It is also, admittedly, a big problem in the industry to play the blame game.  This is frankly the case in all parts of IT, and it isn’t always conscious – but it is very noticeable to others involved.  Instead of taking an issue (security or otherwise) by the reins and running with it, we take guesses and blame it on other parts of the technology stack that we may not be as familiar with, because it seems like that could be the culprit.  Frankly, I’m guilty of that myself, although I do try to avoid it and take ownership of issues as much as I possibly can.  While the blame game is an issue, separation of duties is very important, although how it looks will depend drastically on the size of the company.  If we are talking about a mom-and-pop shop with one or two technicians, then they will obviously have the keys to the kingdom; whereas a massive global organization is likely to have hundreds of people in IT roles and will have better-defined duties to separate out.  I’m not saying that people shouldn’t have the access they need to do their job (something those of us in security often forget – after all, availability is part of the CIA triad), but a network administrator shouldn’t have access to change code that the app/dev team manages, and the app/dev team shouldn’t be able to update routing rules.  While a holistic view of the network is vitally important, it’s also important to ensure that those with that view understand what they are looking at.
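
To make that concrete, here’s a minimal sketch of separation of duties expressed as a role-to-permission mapping.  All of the role and permission names are hypothetical examples I made up for illustration, not any particular product’s access model:

```python
# Minimal separation-of-duties sketch. Role and permission names are
# hypothetical examples, not any specific product's access model.

ROLE_PERMISSIONS = {
    "network_admin": {"network:read", "network:update_routing"},
    "server_admin": {"servers:read", "servers:administer"},
    "developer": {"dev_env:read", "dev_env:deploy", "code:write"},
    # Security gets broad *read* visibility for monitoring,
    # but no change rights anywhere.
    "security_analyst": {"network:read", "servers:read", "dev_env:read", "logs:read"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A network admin can change routing, but can't touch application code:
assert is_allowed("network_admin", "network:update_routing")
assert not is_allowed("network_admin", "code:write")

# Developers can ship code, but can't update routing rules:
assert is_allowed("developer", "code:write")
assert not is_allowed("developer", "network:update_routing")
```

Notice that the hypothetical security role gets read-only visibility across everything – which is one way to reconcile a “holistic view” with least privilege.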

This isn’t to say that anyone on these teams is incompetent, but even within the infosec industry there are a large number of people who don’t know how to gather actionable data.  Just having access to all of the data isn’t enough; you need to be able to manage it centrally (such as in a SIEM) and know what to look for and how to filter out the noise – it’s a very specialized and tedious skillset.  Very few networking or server teams have that skillset and are willing to apply it in a threat hunting/SOC scenario.  In fact, fixing visibility issues is almost always one of the biggest projects I take on when I onboard a new client.  While most organizations hope to catch threat-actors within weeks to months after entry, once I get the visibility I need, I measure our detection in hours, not days.  Now, it’s important to remember that anomaly detection and visibility are reactive controls, not proactive controls.  If you are detecting a breach through an anomaly, then in most cases you already have a breach… whether or not there has been impact yet is another question; but you’ve had the breach, and now you have to enact your IR plan.  That’s the main issue with putting all your eggs in the visibility basket.
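
As a toy illustration of what “filtering out the noise” looks like once logs are centralized, here’s a sketch of a sliding-window detector that flags bursts of failed logins.  The event format, threshold, and window are all invented for the example – a real SIEM correlation rule does far more than this:

```python
# Toy noise-filtering sketch over centralized auth logs.
# The event format, threshold, and window are invented for illustration.
from collections import defaultdict
from datetime import datetime, timedelta

FAILED_LOGIN = "auth_failure"
WINDOW = timedelta(minutes=5)
THRESHOLD = 10  # alert when one source fails this many times in the window

def find_bursts(events):
    """events: (timestamp, event_type, source_ip) tuples, sorted by time.
    Yields (source_ip, count) whenever a source crosses the threshold."""
    recent = defaultdict(list)  # source_ip -> timestamps of recent failures
    for ts, event_type, src in events:
        if event_type != FAILED_LOGIN:
            continue  # drop the noise: everything we aren't hunting for
        bucket = recent[src]
        bucket.append(ts)
        # Slide the window: forget failures older than WINDOW.
        while bucket and ts - bucket[0] > WINDOW:
            bucket.pop(0)
        if len(bucket) == THRESHOLD:
            yield src, len(bucket)

# Twelve failed logins from one address within twelve seconds -> one alert.
logs = [(datetime(2021, 5, 21, 9, 0, i), FAILED_LOGIN, "203.0.113.7")
        for i in range(12)]
for src, count in find_bursts(logs):
    print(f"ALERT: {count} failed logins from {src} within {WINDOW}")
```

Even this toy version shows why the skillset is specialized: the detection is only as good as the threshold, the window, and the analyst’s idea of what “normal” looks like.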

 

So, up to this point I have mostly agreed with Professor Gwinn, with just a few semantic differences here and there; but here is where I disagree with him the most, and I’m going to read you his statement word for word: “Implement a ‘one strike and you are out’ hiring policy for information security employees.  When they fail, do not let it happen twice. Also, never hire an information security employee who has ever worked for a firm that has had a security incident.  Their ‘industry best practices’ did not work for the previous employer, why would they work better for the next victim?  These former employees bring disaster.”

Now, to be fair, an editor’s note was later added to the bottom of the article which states “The author, Professor Gwinn, states that his column included ‘what is likely to have been the worst wording [he has] ever used in [his] life’ in the 19th and 20th paragraphs, which suggested that he favored the ‘willy-nilly firing of a whole staff of people after a security incident. [His] intent was to hold leadership accountable.’  He now states that businesses and industries should ‘implement a ‘one strike and you are out’ hiring policy for information security leadership whose job it was to secure systems and networks after a major, expensive breach.  Rotate leadership and do not let it happen twice.  Also, weed out and avoid hiring that former information security leader.”

 

I do appreciate his clarification; however, I still take a lot of issue with that statement – not because I dislike someone saying I shouldn’t be in the industry anymore if I make a major mistake… obviously there are instances where I think people do need to be blacklisted from the industry.  For instance, CISOs who have no background in infosec and instead have graduate degrees in music composition (yes, that was the case with a major company that had a breach).  That being said, the immediate assumption that information security personnel were responsible for a lapse in controls may often be false.  I can’t tell you how many times I’ve built out a solid security plan for a client and, due to cost or other issues, they only end up implementing half of it.  Is it my fault that they didn’t listen to me and instead prioritized the company’s profits?  You might argue that I could have tried harder to get them to see the value (and by no means do I just roll over when they say no and leave themselves open), but this is a systemic issue across corporate America.  This doesn’t even begin to address the issue of acceptable risk.  There is always some level of accepted risk in information security; there is no way to control for all risk.  You could air-gap a computer by encasing it in a block of cement and leave it locked at the heart of Fort Knox, and there would still be some level of risk.  And even that isn’t an option in 99.9% of situations.  In information security, you have two main things to juggle in getting acceptance for anything.  First, you have to understand that availability is part of the CIA triad (which stands for confidentiality, integrity, and availability – the three things the industry pretty much universally agrees need to be protected within the information security role).  Second, you have to juggle the needs of the business.  If your solution is stopping the company from doing business, either because you are bankrupting them to enact your plan or because employees can no longer access the resources they need to do their jobs, then you have failed.  If the company is still running without going bankrupt and employees can access resources, then you are accepting risk in some fashion.
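
To put a number on what “acceptable risk” can look like, here’s a back-of-the-envelope sketch using the classic annual loss expectancy calculation.  Every figure is invented for illustration; real risk quantification is far more involved:

```python
# Back-of-the-envelope risk acceptance. All figures are invented
# for illustration; real risk quantification is far more involved.

def annual_loss_expectancy(likelihood_per_year: float, impact_usd: float) -> float:
    """Classic ALE: expected annual loss = likelihood x impact."""
    return likelihood_per_year * impact_usd

# Hypothetical scenario: a 5% chance per year of a $2M incident,
# versus a control that costs $150k per year to run.
ale = annual_loss_expectancy(likelihood_per_year=0.05, impact_usd=2_000_000)
control_cost = 150_000

print(f"ALE: ${ale:,.0f}/year, control cost: ${control_cost:,.0f}/year")
# If the control costs more than the loss it's expected to prevent,
# formally accepting the risk can be the rational business decision.
print("Accept risk" if control_cost > ale else "Implement control")
```

In this made-up scenario, the control costs more than the expected loss, so the business accepts the risk – and when that accepted risk materializes, it isn’t automatically the infosec team’s failure.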

The bottom line is that just because a company had a breach does not mean its infosec employees should be immediately blacklisted – although that is often what happens, and usually the employees with the least power to enact change are the first to be blacklisted.

 

Professor Gwinn goes on to recommend embracing “holistic” approaches to information security, encouraging collaboration with other teams, and going against the grain of “industry best practices.”  I do agree with that – always question the “why” of something, and don’t rely on tradition to tell you what to do about a constantly evolving threat landscape.  What I don’t agree with are his further recommendations: to replace information security professionals with “competent technically skilled professionals” [referring to the rest of the IT department], to give them the tools and access to protect the firm’s resources by granting network engineers access to the server cluster and developers access to the network and workstations, and to revert to the practices that were used before ransomware, breaches, and other information security disasters became commonplace.  That I disagree with quite a bit.  The hiring methods he recommends may make sense for smaller companies; I’ll absolutely give him that one.  But in a medium-to-large company or enterprise, each of those teams has its own job to do, and you need security personnel whose job it is to watch over everything and look for anomalies, for TTPs (the tactics, techniques, and procedures used by attackers), and for other IOCs (indicators of compromise).  You also need security personnel for more advanced work such as pen-testing, threat hunting, and vulnerability analysis.

In regard to Professor Gwinn’s argument, which he sums up as “The security approaches that existed before ‘industry best practices’ really do work.  Ask the next hacker who breaches security.”, I would contend that a malicious threat-actor who breached your network is not the one you should be asking for advice on how to secure your network so it doesn’t happen again.  And the argument that the security approaches that existed before breaches became commonplace really work does not hold, because that was simply security through obscurity.  Those old practices were replaced because they do not work.  We need to be forward-thinking.  We need to be thinking about user-centric security, where instead of blaming the user or simply relying on security awareness training (which isn’t meant to diminish the importance of SAT), we assume that the user is going to do the worst thing possible, and we control for that.  When they don’t, we are pleasantly surprised.  Human nature is to make mistakes.  Instead of finding someone else to blame for a mistake, let’s plan ahead for those mistakes.  Recognize that we are going to make mistakes ourselves, leverage collaboration with a larger team to help fill those gaps, and have monitoring and logging in place that is actively analyzed, so that in-progress breaches can be stopped in hours, not days, weeks, months, or years.

Overall, I really liked Professor Gwinn’s article; it ruffled feathers and got discussions rolling, and I think that is needed in the information security community.  I think we need to get out of the rut that a lot of information security personnel are falling into, where they are dealing more with office politics than with threat-actors and keeping up to date on the latest threat research.  If what we are doing now were working, we wouldn’t be seeing breaches in the news every day; so instead of playing the blame game to see whose fault it is, we should be looking at the systemic issues in the industry, figuring out how to fix them, and empowering those who are in place to protect the organization against future threats.

 

Thank you to everyone who tuned in and made it this far.  If you liked this podcast, please like and subscribe to it, and let me know what you thought!  I look forward to the next discussion on the information security industry.  Until next time, Stop, Think, Connect.

Introduction
Article Summary
Analysis
Hiring practices disagreement
Summary/looking forward