Cybersecurity Question of the Month: February

Posted by Fuel HQ on Mar 15, 2017 9:00:00 AM

New on Fuel for Thought, we’re introducing a Cybersecurity Question of the Month! We asked Fuel members to weigh in with their opinions in February, and we’re sharing their answers here.

Phil De Meyer, a senior network administrator, and Robert Beckerdite, a senior systems engineer, are members of Fuel’s Editorial Subcommittee under the Community Development Council. Phil and Robert shared their thoughts on the following question: Will there be a large-scale breach in the public cloud?
Many organizations are using or looking to use public cloud services, like Amazon Web Services (AWS) or Microsoft Azure, for new projects. In 2017, cybersecurity experts are predicting a security incident resulting in the loss of data stored in a public cloud as more companies move their business-critical applications there.

Will there be a large-scale breach in the public cloud? Why or why not?

Robert: To help frame this, let’s define the public cloud as an external hosting company. The question is then twofold:

  1. Will a customer be compromised in the public cloud because of their own security practices?
The short answer is yes. Customers are compromised all the time, and your infrastructure, whether it sits in your own data center or in the public cloud, is still only as secure as you make it.
  2. Will a public hosting provider make a mistake that allows one or all of their customers to be compromised?
I think this will happen, but it will most likely be a smaller provider focused on commodity services that cuts a corner on security in a way that allows it to be hacked.

That theme will continue as cloud hosting becomes more of a commodity. Pricing pressure will inevitably lead to compromises of both customers and hosting providers. Each incident will lead to better industry practices for a time, but it will remain a balancing act: customers will push providers to offer services more cheaply, and stockholders will push cloud providers for more profit.

It is really important to understand the shared responsibility model and to keep your data encrypted in a way that ensures a hosting provider breach won’t compromise your company. We all do our best, but even the best make mistakes and get tired. I once read that, statistically, about 10 percent of data entry contains errors.
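To make that principle concrete, here is a minimal Python sketch using the third-party cryptography package: the data is encrypted before it ever leaves your environment, so the provider only stores ciphertext. The upload_to_cloud call is a hypothetical stand-in for whatever storage SDK you use, and the key would live in your own key management system, never with the provider.

# A minimal illustration of keeping data encrypted before it ever reaches a
# hosting provider: the provider only stores ciphertext, so a compromise on
# their side does not expose the plaintext.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Generate and safeguard the key yourself (e.g., in your own KMS or HSM);
# the whole point is that the cloud provider never sees it.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"business-critical customer data"
ciphertext = cipher.encrypt(record)

# upload_to_cloud(ciphertext)  # hypothetical call to your provider's SDK

# Only the key holder can turn the stored object back into plaintext.
assert cipher.decrypt(ciphertext) == record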

Phil: I agree with Robert. It depends on your definition, but I think it has already happened. A very large attack surface that is deliberately made accessible to everyone is very hard to control.

The recent attacks on exposed MongoDB instances, in which data was encrypted and held for ransom, are a perfect example of just this scenario, with money as the motivation. And because the underlying flaw is in a platform used across many hosted solutions, the amount of data and the number of people affected grow dramatically.

Several major cloud service providers are users of this service. While the configuration itself is not their fault, vulnerabilities at the application or service level (web services, database services and the like) do happen and can have a far larger impact.

What happens if the next flaw is found in the OS image of a major service provider? More importantly, how many organizations without a dedicated security role will have carefully examined the OS and applications they have been provided, and then deployed, for vulnerabilities? In the MongoDB case, it looks like a few thousand thought they were OK.
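For readers who want to spot-check their own deployments, a rough Python check along these lines (assuming the third-party pymongo driver; the host name below is hypothetical) will tell you whether an instance answers and lists its databases without any credentials at all, which is the misconfiguration behind these attacks.

# Rough check for a MongoDB instance that accepts anonymous connections.
# Requires the third-party "pymongo" driver (pip install pymongo).
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def allows_anonymous_access(host: str, port: int = 27017) -> bool:
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        # On a locked-down server this call fails without authentication.
        client.list_database_names()
        return True
    except OperationFailure:
        return False  # authorization is enforced
    except ServerSelectionTimeoutError:
        return False  # host unreachable, nothing to conclude
    finally:
        client.close()

if __name__ == "__main__":
    if allows_anonymous_access("db.example.internal"):  # hypothetical host
        print("Instance is wide open: enable authorization and restrict bindIp.")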

But if your definition is a service hosted by a service provider (e.g., Office 365), I think we have yet to see this happen at a large scale. Black Hills Information Security has shown there is reason to be concerned about the authentication mechanisms for Microsoft Exchange. This is the area that concerns me most: managing account federation and usability while still properly securing the authentication transaction. In this scenario, the customer is dependent on the provider, since the provider controls which methods it will support. It would also be interesting to see what the reporting requirements are for service providers, as that would greatly influence how much we learn about these incidents.
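As a small, read-only illustration of that dependency, the Python sketch below (using the requests library) asks a hosted Exchange endpoint which authentication schemes it advertises to anonymous clients. The exact URL and response behavior will vary by tenant configuration, but an advertised "Basic" scheme is the kind of legacy mechanism the Black Hills research targets with password spraying.

# Ask a hosted Exchange Web Services endpoint which auth schemes it offers
# to an unauthenticated client. Behavior depends on tenant configuration.
import requests

EWS_URL = "https://outlook.office365.com/EWS/Exchange.asmx"

response = requests.get(EWS_URL, timeout=10)
offered = response.headers.get("WWW-Authenticate", "")

print(f"HTTP {response.status_code}")
print(f"Advertised schemes: {offered or '(none)'}")
if "Basic" in offered:
    print("Basic auth is exposed -- consider disabling legacy protocols "
          "and enforcing modern authentication with MFA.")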

It will be very hard to keep this from happening, and whether hosted or on-premises, it will still take the same effort to properly monitor these services against attack. That makes knowing that responsibilities are properly aligned across your organization, your customers and your providers a key factor in a successful implementation.

About our contributors:

Robert Beckerdite, a senior systems engineer and Fuel member, has been using Palo Alto Networks products since 2011.

Phil De Meyer, a senior network administrator and Fuel member, has been using Palo Alto Networks products since 2013.

Check Out Our Next Cybersecurity Question of the Month

Weigh in with your opinions, and we'll share your answers in a roundtable format on Fuel for Thought. It's an easy, quick way to share your expertise and contribute to the Fuel community!

Topic: The use of artificial intelligence (AI) and machine learning (ML) within cybersecurity is not new; vendors have been leveraging them for threat analysis and the big data challenges posed by threat intelligence. On one hand, security solutions powered by unsupervised machine learning may churn out too many false positives and alerts, resulting in alert fatigue and desensitized analysts. On the other hand, the volume of data and events generated in corporate networks is beyond the capacity of human experts.
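As a toy illustration of that trade-off, the Python sketch below (assuming scikit-learn and NumPy, with purely synthetic event data) shows how the contamination setting of an unsupervised IsolationForest directly dials the number of alerts, and therefore potential false positives, an analyst has to triage.

# Illustrates the alert-volume trade-off of unsupervised anomaly detection.
# Requires scikit-learn and NumPy; the event data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic event features, e.g. bytes transferred and hour of login.
normal_events = rng.normal(loc=[500, 12], scale=[50, 2], size=(10_000, 2))
odd_events = rng.normal(loc=[5_000, 3], scale=[500, 1], size=(10, 2))
events = np.vstack([normal_events, odd_events])

for contamination in (0.001, 0.01, 0.05):
    model = IsolationForest(contamination=contamination, random_state=0)
    alerts = (model.fit_predict(events) == -1).sum()
    print(f"contamination={contamination}: {alerts} alerts to triage")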

How can cybersecurity professionals leverage AI/ML frameworks to implement predictive security measures? What are some pros and cons of machine learning?

Share Your Thoughts

Tell us your thoughts by contacting Jaclyn Moriarty, our Editorial Coordinator, at jmoriarty@fuelusergroup.org, or share your answer in the March 2017 Cybersecurity Question of the Month thread on the forum.

Topics: Cybersecurity, Hot Topic, Cloud Security, Data Breaches
