Thursday, January 23, 2020
Featuring the Unit 42 Threat Research Team, Cloud-Focused Division
As cloud platforms gain wider adoption, cloud service providers (CSPs) are keeping up through constant innovation. With that innovation, however, can come increased complexity, as well as a lack of understanding on the customer’s end for how to put a shared responsibility model into practice.
With this in mind, Unit 42’s cloud-focused division developed a research report, “Cloudy With a Chance of Entropy,” designed to empower businesses with knowledge and best practices to fulfill shared responsibility in the cloud and protect their cloud against pervasive threats.
Fuel editorial committee members reviewed the report and asked the Unit 42 cloud research team their burning questions to learn more.
If a customer could only do one thing to remediate their cloud accounts, what should it be?
The single most impactful thing a customer can do to decrease their risk is to narrow their attack surface. Making cloud infrastructure less vulnerable to attack starts with ensuring that the cloud systems, containers or serverless code they deploy are free from known vulnerabilities prior to deployment. Ensuring that each workload runs the latest software version greatly reduces the attack surface and makes it more challenging for attackers to compromise the infrastructure. The most effective way to do this consistently is to integrate security into the CI/CD process, so that every build is scanned before advancing down the pipeline.
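As a minimal sketch of that kind of CI/CD gate: a pipeline step runs a vulnerability scanner, then a small script decides whether the build may advance. The JSON report shape, severity names and CVE IDs below are illustrative assumptions, not any specific scanner's output format.

```python
import json

# Severities that should block a build from advancing down the pipeline
# (illustrative threshold -- each organization sets its own policy).
BLOCKING = {"CRITICAL", "HIGH"}

def gate(findings):
    """Return only the findings severe enough to fail the build."""
    return [f for f in findings if f.get("severity") in BLOCKING]

# A CI step would run the scanner first, then feed its JSON report to
# gate(); a non-empty result fails the stage before anything is deployed.
report = json.loads('[{"id": "CVE-2019-0001", "severity": "HIGH"},'
                    ' {"id": "CVE-2019-0002", "severity": "LOW"}]')
blocked = gate(report)
print("fail build" if blocked else "pass")  # fail build
```

In a real pipeline the exit code of a script like this would pass or fail the stage, giving every build the same scan before it moves forward.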
What will it take to get organizations to adopt/employ/utilize modern SSL? What will be the turning point that gets it to happen?
As our recent Cloud Threat Report highlighted, 61% of organizations are still using TLS v1.1. That protocol version was superseded by TLS v1.2 in 2008, which addressed weaknesses in its underlying cryptography. The current industry standards are TLS v1.2 and v1.3, which are proving far more resistant to compromise. Just as an organization should use the latest versions of the software and services it deploys, it should treat its encryption protocols the same way. By using the latest protocol versions, a customer again reduces their attack surface and makes it harder for attackers to find footholds into their infrastructure.
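As one concrete illustration, modern TLS libraries let an application refuse legacy protocol versions outright rather than silently negotiating down. This sketch uses Python's standard `ssl` module (the `TLSVersion` and `minimum_version` APIs exist in Python 3.7+):

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any handshake offered below TLS 1.2 (e.g. a server stuck on TLS 1.1)
# will now fail outright instead of downgrading the connection.
print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

Setting a floor like this in every client and server is a small change that removes the deprecated versions from the attack surface entirely.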
Do you think that cloud service providers (CSPs) have a responsibility to monitor for default configurations and/or insecure configurations?
CSPs already provide popup windows and warning prompts when a customer is about to deploy a system with extremely weak configurations, but these can easily be skipped or not read properly. Organizations also tend to use multiple CSPs, so CSP-provided warnings and prompts often get lost in the complexity of multiple cloud consoles. There is a fine line between the shared responsibility of the CSP and the customer for the state of a particular cloud instance. CSPs provide infrastructure and services, and customers have the right to use them in their own way. The best way to avoid security gaps under the shared responsibility model is for customers to clearly map out what they are responsible for. This clarity upfront usually results in a much stronger security position for the client.
When looking at cloud providers like Microsoft Azure, what offerings does Palo Alto Networks have for more “diaphanous” offerings like Functions, Web Apps, etc.? (We are referring to solutions that don't have the ability to route traffic to a specific gateway like a virtual machine would allow.)
First off, good word!
This product question has two answers. First, Palo Alto Networks has a solution that ties directly to the client's API interface with their cloud provider, including Azure. This connection allows Prisma Cloud to monitor the actions a customer's cloud infrastructure performs as if it were the customer. We are able to see instance configurations and serverless functions, the network connections those services send and receive, who created them, and whether they conform to stated compliance policies. This functionality integrates directly with Azure Functions, as well as AWS Lambda and Google Cloud Functions.
Second, Prisma Cloud is able to place agents within a cloud environment to provide monitoring of instances, containers, functions and services running in those environments. This allows Prisma Cloud to detect malicious actions and deliver visibility into the customer's cloud environment. The second solution can be architected to overcome the routing challenge you proposed. But the first solution does not require a network route to an isolated instance, as the API interface simply pulls the information directly.
How can a company moving to the cloud for the first time handle these issues without a full-time team or person overseeing them? What options are available, and how can they stay educated about updates?
This is a key question for our industry today. Education is certainly the most important component. A single competent engineer who understands and excels at system administration will, with a little training, perform well in the cloud. We have seen many small security teams scale to meet cloud demands by not relying upon dozens of individual point products. The most agile teams typically take a platform approach to security, and this decrease in complexity often makes up for the smaller team size.
To support that person or team, and to support the organization as a whole, it is vital that a Change Control Board (CCB) is directly involved in the cloud migration process. A CCB may give the perception of slowing the cloud migration down, but as our Cloud Threat Report highlights, default configurations and unpatched services are the number one security risk facing organizations in the cloud.
To add one more layer to this concept, employing Shift Left security throughout the migration is a critical component. (Shift Left means building security into the entire development and migration lifecycle.) The earlier developers are given feedback, the more secure the final product will be. Each organization should have the mantra, "It must be scanned as early and as often as possible." This will greatly assist any organization moving to the cloud.
What was the source of the numbers in the report?
The data we gathered for the network connection analysis came from Palo Alto Networks proprietary sources. We used third-party network scanning tools like Shodan and Censys, but we also performed a number of network scans from our own infrastructure to corroborate our findings.
More to Explore
Check out these Fuel blog posts for further reading: