Managed Security Services Provider - Use Cases

October 30, 2018


In our fourth episode of the Managed Security Services podcast, Jerry and Kevin highlight some of their most significant before-and-after cases of clients using an MSSP. From a large healthcare company that needed visibility over its IT assets to a small private equity firm that wanted to better understand the risk posture of its portfolio companies, learn from these real-life case studies.


Transcript

Kelly Critelli: Hello and welcome back to the EisnerAmper and Cloud Access podcast series where we're talking about managed security services providers (MSSPs). I'm your host, Kelly Critelli, and with us today are Jerry Ravi, EisnerAmper's Partner-in-Charge of Process Risk and Technology Solutions, and Kevin Nikkhoo, CEO of Cloud Access. In case you missed it, our last podcast dealt with the kind of reports a client should expect to receive and what they should do with that information once they obtain it. Jerry and Kevin, welcome and thanks for being here.
Jerry Ravi: Thank you, Kelly.
Kevin Nikkhoo: Thank you.

KC: Can you share with us some MSSP before-and-after client stories?
KN: We had one health care client, and we got a call from the CIO who said, “I really don't have any visibility into my security posture. I know I've got firewalls and routers and typical stuff, but I don't know what's going on in our environment.” We started by looking at the vulnerabilities, did an IT asset discovery, and performed pen testing in that environment and, to his great surprise, we found that there were many vulnerabilities in the environment. For example, they had multiple Microsoft patches that had not been applied; they were behind by several patches, which obviously opens up a lot of security holes. He was surprised that he had so many assets that were not on his radar. The IT team had added new servers and wireless devices that obviously created additional security holes he was not aware of. Those assets were important to him. And finally, with the pen testing, we found out which applications were externally facing and which ports were open that should not have been. As a result, we went back and made recommendations to do the patching, close the ports, and add those important assets to his inventory. They cleaned up the environment. Shortly after we did all of this, he came back and said, “I feel so much more comfortable, and I can sleep at night because I know we tightened up the environment that we have.”
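To make the open-port finding concrete, here is a minimal sketch, in Python, of the kind of external exposure check described above. It assumes you already know which ports are approved to be reachable; the host address and port lists are hypothetical examples, and a real MSSP assessment would use purpose-built scanning tools rather than a script like this.

# Minimal sketch of an external exposure check (illustrative only).
# The host address and the "approved" port list are hypothetical examples.
import socket

APPROVED_PORTS = {443}            # only HTTPS should be reachable from outside
HOSTS = ["203.0.113.10"]          # externally facing assets found during discovery
PORTS_TO_TEST = [22, 80, 443, 3389]

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    for port in PORTS_TO_TEST:
        if is_open(host, port) and port not in APPROVED_PORTS:
            print(f"FINDING: {host}:{port} is reachable but not approved; close it or firewall it")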
JR: I have a very similar story with a client in the private equity world, where two C-level executives wanted to understand their portfolio companies. These are smaller businesses, on the SMB side, about 20-plus companies, and the executives didn't have any visibility into their risk posture in terms of what was important. Did they even have any personally identifiable information on their servers? Before, there was no visibility; after, lots of visibility. They were able to see, again, what assets they had and what data was stored on those assets, which immediately put them into a different risk posture. Each company in the portfolio may have had a different risk posture based on what we were able to identify. That's the first thing: identify the assets and understand what you have, and then start to protect. And, at that point, you have to start detecting, too. That was the other issue: they had just basic security in place, and the logs, unfortunately, would come up with lots of false positives. They didn't necessarily aggregate the data and know exactly what was happening. Going back to before, there was no visibility; after, it's “Okay, now we're going to aggregate the logs from the firewalls, the servers, and some of the devices that are important to us.” That's what happened after they were able to bring that to a central console to monitor. That's really key when you look at this type of MSSP model: you're not just aggregating the data and saying “I'm done.” You keep looking at it all the time to make sure that you're identifying the critical events that management needs to respond to, and you're helping them respond at the same time. The National Institute of Standards and Technology (NIST) came up with the prevailing framework for cybersecurity, which regulators reference as well. It helps you identify, protect, detect, respond and recover. The companies didn't necessarily feel as though they were doing all the right things in that space; actually, they didn't even know if they were. At least they were able to look at it holistically across the portfolio of companies and put a plan in place. Then they started to do other things, like security awareness training, phishing tests, vulnerability scans, and weekly and monthly reports. That's important; the before and after were completely different.
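As a rough illustration of the log aggregation Jerry describes, the sketch below pulls events from several sources into one place and keeps only the high-severity ones, which is the basic idea behind a central console that cuts down on false positives. The file names, severity values, and event fields are assumptions for the example, not any particular product's format.

# Minimal sketch of aggregating logs into one view and surfacing critical events.
# File names and the event schema (severity, timestamp, source, message) are assumed.
import json
from pathlib import Path

LOG_SOURCES = ["firewall.jsonl", "servers.jsonl", "endpoints.jsonl"]
CRITICAL_SEVERITIES = {"critical", "high"}

def load_events(path: str):
    """Yield one event dict per JSON line; skip lines that don't parse."""
    for line in Path(path).read_text().splitlines():
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue

# Central "console": merge everything, then keep only events worth escalating.
all_events = [event for source in LOG_SOURCES for event in load_events(source)]
alerts = [e for e in all_events if str(e.get("severity", "")).lower() in CRITICAL_SEVERITIES]

for event in sorted(alerts, key=lambda e: e.get("timestamp", "")):
    print(event.get("timestamp"), event.get("source"), event.get("message"))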
KN: We see this with a lot of small and medium-sized businesses. They put the firewall, router, email filtering and web filtering in place and they feel that they're secure. But security in general, and what an MSSP handles, is far more than that. It's not enough to have one router and say we've got a couple of security products in there. Even if they do have those in place, how do you monitor them? How do you look at them continuously and identify any potential issues? Most companies have to take it to the next level in this new environment, where hackers always find ways to get in. There has to be continuous monitoring and continuous adjustment of those security postures by putting in the latest technology and protecting databases from threats. Beyond that, you want to get to a point where artificial intelligence helps identify those threats over time. There is a continuous battle between the security staff and experts on one side and the hackers on the other, in identifying what the hackers are doing.
JR: That's important. There are a lot of scenarios similar to the “before” scenario, where a company is very static in how it looks at its security posture and what it does. Many companies in that SMB space, probably 40% to 50%, are only doing a security vulnerability assessment on an annual basis versus continuously, to your point. The before scenario is: we have a static, point-in-time vulnerability assessment that tells us, okay, we have these potential patches and issues that we need to address. But the problem is that's not continuous. The day after that assessment is done, it's old. What do we do going forward? Things are changing so much, even in the SMB world, that we need to understand what we have. That's the issue with the visibility question we're talking about in the before and after. Even if a company is doing some things other than just antivirus, which is fairly trivial at this point and only covers 40% to 50% of events, you don't really know what threats are out there. And the hackers are definitely spending a lot more time on the smaller businesses these days than the larger enterprises, because large enterprises are going to have more budget and a bigger team, and they're also using MSSPs at a larger scale. But SMBs are not doing that, and even mid-sized companies are not. That's where the issue comes in: before, no visibility, or maybe a little bit of visibility with a security vulnerability assessment, but nothing compared to continuous monitoring. This means seeing it day to day and understanding where we are. That's a big piece of what I would call the before and after: the after picture needs to be continuous versus static.
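One way to picture the continuous-versus-static point is a scan that reruns on a schedule and diffs its results against the previous run, so new issues surface the day they appear rather than at the next annual assessment. In this sketch, scan_environment() is a stand-in for whatever scanner or MSSP feed actually produces the findings; the interval and finding IDs are placeholders.

# Toy illustration of continuous monitoring: rerun a check on a schedule and
# diff against the previous result. scan_environment() is a placeholder.
import time

def scan_environment() -> set:
    """Placeholder: return finding IDs (missing patches, open ports, new assets)."""
    return {"example-patch-missing", "tcp/3389-open"}

SCAN_INTERVAL_SECONDS = 24 * 60 * 60   # daily, versus once a year
baseline = set()

while True:
    findings = scan_environment()
    new = findings - baseline          # appeared since the last run
    resolved = baseline - findings     # fixed since the last run
    if new:
        print("New findings to triage:", sorted(new))
    if resolved:
        print("Remediated since last scan:", sorted(resolved))
    baseline = findings
    time.sleep(SCAN_INTERVAL_SECONDS)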
KN: There also has to be cross-correlation of these events. We're talking about events coming in from different security products, and we're talking about continuous monitoring. But in order to achieve what's called situational awareness, to understand what's happening in the environment, we have to be able to cross-correlate this information: information coming from different devices, applications, the network, the network traffic, and the protocols that are being used, and correlate all of that together in order to understand what's happening in the environment. In other words, we're not looking at siloed security information. By going across the security information coming from different devices and collecting and cross-correlating the logs, we have much better visibility as an MSSP, much better visibility to see what's going on, which is not really possible by looking at one set of data coming from one device.
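To show what cross-correlation means in practice, here is a small sketch that joins events from two different feeds by host and time window: a failed login followed shortly by a large outbound transfer from the same machine is flagged, while either event alone is not. The event structure, feed names, and five-minute window are illustrative assumptions, not any particular platform's rules.

# Minimal cross-correlation sketch: join events from different sources by host
# and time so a combination of events can raise an alert that neither would alone.
from datetime import datetime, timedelta

events = [
    {"time": datetime(2018, 10, 30, 9, 0),  "source": "vpn",      "host": "10.0.0.5", "type": "auth_failure"},
    {"time": datetime(2018, 10, 30, 9, 3),  "source": "firewall", "host": "10.0.0.5", "type": "large_outbound_transfer"},
    {"time": datetime(2018, 10, 30, 11, 0), "source": "firewall", "host": "10.0.0.9", "type": "large_outbound_transfer"},
]

WINDOW = timedelta(minutes=5)
failures = [e for e in events if e["type"] == "auth_failure"]

for e in events:
    if e["type"] != "large_outbound_transfer":
        continue
    for f in failures:
        if f["host"] == e["host"] and timedelta(0) <= e["time"] - f["time"] <= WINDOW:
            print(f"Correlated alert on {e['host']}: auth failure at {f['time']}, "
                  f"then outbound transfer at {e['time']}")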
KC: Jerry and Kevin, thanks for your expertise and this great insight. And thank you for listening to the EisnerAmper Cloud Access podcast series. In the next episode, we'll be discussing best practices in managed security services. We hope you'll join us. In the meantime, visit eisneramper.com for more information on this and a host of other topics.

