Thursday, November 12, 2020
By Maril Vernon and Charles Buege, Fuel User Group Members, Fuel Editorial Advisory Committee
Probably the most common misunderstanding we have encountered about cloud computing concerns the varying degrees of consuming cloud services: IaaS (infrastructure as a service), PaaS (platform as a service) and SaaS (software as a service). If you already use or plan to use these service models, there are some nuanced differences from on-prem that don’t necessarily carry over to cloud components. Additionally, the cloud, while elastic, is not a wide-open space in which servers can grow endlessly without consequences.
In part three of our “Cloud Myths You’re Probably Falling for Right Now” series, we’ll go over some of the most common presumptions made by customers that get them into trouble related to IaaS, PaaS and SaaS.
Myth #1: My database can grow as large as I want it to.
The short answer is no, your database cannot just grow “as large as you want it to.” It’s more accurate to say it can grow as large as you allow it to. First, regardless of your cloud provider, the cloud is not automatically scalable; this is something you need to configure for each database instance. And just because you CAN go “maximum size” on everything, keep in mind that you will pay for that capability as well. There are other aspects you need to take into account. Here are a couple of specific examples:
AWS users: Scaling is not turned on for you. You will need to configure storage autoscaling on your database (DB) instance and set up Amazon CloudWatch alarms so that storage is grown, or new instances are launched (then terminated), based on a predetermined metric of your choice: hour of the day, usage, storage, etc.
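As a rough sketch of what “configure it yourself” looks like with the AWS CLI, the commands below set a storage ceiling for autoscaling on an RDS instance and create a CloudWatch alarm on free storage. The instance name, threshold and SNS topic ARN are placeholders, not values from this article:

```shell
# Opt in to RDS storage autoscaling by setting a maximum allocated storage
# (in GiB) that the instance is allowed to grow toward.
aws rds modify-db-instance \
  --db-instance-identifier my-db \
  --max-allocated-storage 1000 \
  --apply-immediately

# Alarm when free storage drops below ~10 GB, notifying a (placeholder)
# SNS topic so you can react or trigger automation.
aws cloudwatch put-metric-alarm \
  --alarm-name my-db-low-storage \
  --namespace AWS/RDS \
  --metric-name FreeStorageSpace \
  --dimensions Name=DBInstanceIdentifier,Value=my-db \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 10000000000 \
  --comparison-operator LessThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```

The point is simply that each piece — the ceiling, the metric, the action — is an explicit choice you make, not cloud default behavior.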
Azure SQL DB users: At its core you are still using SQL Server, which means the 4TB size limit inherent to SQL Server still applies. But Azure does offer capabilities to get past 4TB, such as its Hyperscale service tier, which permits your database to grow up to 100TB in size.
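For illustration, opting into Hyperscale is itself a deliberate provisioning choice. A hedged Azure CLI sketch, with placeholder resource group, server and database names:

```shell
# Create a database on the Hyperscale tier (grows up to 100TB)
# rather than the standard tiers with their 4TB ceiling.
az sql db create \
  --resource-group my-rg \
  --server my-sql-server \
  --name my-db \
  --edition Hyperscale \
  --family Gen5 \
  --capacity 2
```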
These are just a few examples. The bottom line is that scalability in the cloud involves more than a “launch and go” mentality. Even in the cloud, you are still limited by the capabilities of the database system you use.
Myth #2: I can use whatever operating system I want in the cloud. I can bring whatever ISO I want to use into the cloud.
No, you can’t use whatever operating system you want within the cloud. Depending on your cloud provider, you have limited flexibility in what you can and cannot choose, but just because you have an ISO file for, say, IBM OS/2 Warp 4.0, doesn’t mean you can simply upload it and go. On the other hand, if your cloud provider offers bare-metal systems running VMware or Hyper-V and allows you to upload ISO images and build VMs there, then you can get your OS/2 Warp virtual machine after all.
Now, that being said, some cloud providers, such as AWS, give you a choice of different ISO images and pre-built images to use in the cloud. Azure, on the other hand, only allows you to choose operating systems from its marketplace of offerings, and you can’t bring ISO images into its system directly. Azure does have some workarounds: you can convert and build your ISOs into compliant VHDs (virtual hard disk/virtual machine disk images) and load them into its systems, but not everything will work.
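One common shape of that Azure workaround, sketched with placeholder file, storage account and resource names: install the OS into a local disk image first, convert that image to a fixed-size VHD, then upload and register it. This is an illustrative outline, not a guarantee that any given OS will boot:

```shell
# Azure expects a fixed-size VHD, not an ISO. Convert a raw disk image
# (produced by installing the OS locally) into VHD ("vpc") format.
qemu-img convert -f raw -O vpc -o subformat=fixed,force_size \
  source-disk.raw target-disk.vhd

# Upload the VHD to a storage container (placeholder names).
az storage blob upload \
  --account-name mystorage \
  --container-name vhds \
  --name target-disk.vhd \
  --file target-disk.vhd

# Register the uploaded VHD as a custom image you can deploy VMs from.
az image create \
  --resource-group my-rg \
  --name my-custom-image \
  --os-type Linux \
  --source https://mystorage.blob.core.windows.net/vhds/target-disk.vhd
```

Even when the conversion and upload succeed, the guest OS still needs drivers and boot support compatible with the platform, which is why “not everything will work.”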
Depending on your cloud provider, some operating systems will not be available to you at all while other providers will have a greater breadth of flexibility as to what operating system you can run. Be sure to check with your cloud provider about support for a given OS before you try to simply deploy it.
Myth #3: My cloud provider will let me choose whatever Internet Protocol (IP) address structure I want.
Yes and no. It depends on the provider and which capability you're using. Most larger cloud providers run a software-defined network (SDN), which lets you use whatever IP addressing scheme you want. Some providers, however, still run a hardware-defined network (HDN) and will assign you specific subnets to fit their IP addressing structures. This becomes a factor if you mix technologies: your VMs may be in an SDN section of the cloud, but if your provider allows you to bring hardware into the mix (rare, but sometimes available), you'll be subject to their IP address assignment conventions.
Network address translation (NAT) is also a factor in the cloud. You can often choose the classless inter-domain routing (CIDR) block and internal IP addresses for specific instances, but for public IPv4 you will either use the cloud provider’s version of Dynamic Host Configuration Protocol (DHCP) by default or need to assign an address statically. Be sure you understand how your cloud provider allocates these public IP addresses so you don’t release the resource by accident and lose the IP address associated with a DNS entry that you will then have to change.
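The split — you choose private addressing, the provider hands out public IPv4 — can be seen in a hedged AWS CLI sketch (VPC, subnet and resource IDs are placeholders):

```shell
# Private side: you pick the CIDR block and carve your own subnets.
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.1.0/24

# Public side, option 1: let the provider auto-assign a public IPv4
# to instances launched in the subnet (DHCP-style, address not yours).
aws ec2 modify-subnet-attribute \
  --subnet-id subnet-0abc1234 \
  --map-public-ip-on-launch

# Public side, option 2: allocate a static address from the provider's
# pool and attach it where you need it.
aws ec2 allocate-address --domain vpc
```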
Myth #4: My cloud provider will let me keep whatever external IP address(es) I want when I first get them.
Yes and no. External static IP addressing, again, is not the default.
If you keep the IP address assigned to a system on your network, then yes, in some cases you can keep the same IPs the whole time, but you need to be careful. If you delete the wrong resource or resource group, you may lose “ownership” of that IP: it gets released back into the pool of available addresses, and someone else may have it assigned to them.
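On AWS, for example, the distinction that bites people is disassociate versus release. A hedged sketch with placeholder IDs:

```shell
# Allocate a static public IPv4 (Elastic IP); it stays in your account
# until you explicitly release it, whether or not it is attached.
aws ec2 allocate-address --domain vpc

# Attach it to an instance.
aws ec2 associate-address \
  --instance-id i-0abc1234 \
  --allocation-id eipalloc-0abc1234

# Detaching keeps the address in your account...
aws ec2 disassociate-address --association-id eipassoc-0abc1234

# ...but releasing returns it to the public pool, where another
# customer can pick it up.
aws ec2 release-address --allocation-id eipalloc-0abc1234
```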
Myth #5: I should keep all cloud assets in the same data center.
There is no simple answer to this myth. What it comes down to is that you need to put thought into the design of your migration before you put a single item in the cloud. Before you deploy one VM, one serverless component, one API module, etc., you need to decide how you want your system to react to a potential data center outage due to a natural disaster, a system compromise due to a distributed denial-of-service (DDoS) attack or a regional internet outage. You will also need to consider cost. Everything in the cloud costs something, so do you start off paying a lot with your system spread across several data centers, or do you start small with an infrastructure designed to grow over time, gradually increasing its size as you grow? Planning is very important from the start. “If you fail to plan, you are planning to fail.” ― Benjamin Franklin
Otherwise, putting everything in one data center defeats the resiliency and redundancy built into the cloud. Hosting assets in different availability zones and regions means, depending on your cloud provider, that you’re literally using different physical data centers in those areas. So if you operate mostly in one availability zone or region but keep a backup or standby site in another, you do not lose functionality when one goes down.
You can also set up instances to scale automatically based on predefined metrics across multiple regions and availability zones to reduce latency.
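As one concrete shape of that idea, here is a hedged AWS CLI sketch that spreads an Auto Scaling group across subnets in two availability zones and scales on average CPU. Group, template and subnet names are placeholders:

```shell
# Spread instances across two AZ-specific subnets for resiliency.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-template \
  --min-size 2 \
  --max-size 10 \
  --vpc-zone-identifier "subnet-0aaa1111,subnet-0bbb2222"

# Scale in and out automatically around a 60% average CPU target.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name cpu-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'
```

Serving from multiple regions (rather than just multiple AZs) additionally requires DNS- or routing-level distribution, which is a separate design decision.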
If you found these myths helpful or interesting and would like more information, please contact the Fuel Editor at firstname.lastname@example.org.
Maril Vernon, aka “@SheWhoHacks,” is a penetration tester and Pluralsight author with courses published on red team tools and MITRE-driven testing methods. Since entering cybersecurity in 2018, Maril has achieved seven certifications in pentesting and security, accelerating her career in an unprecedented time. Recently, Maril was also a contributing editor of the latest CIS AWS Foundation Benchmark for cloud security.
Charles Buege is the senior DevOps engineer for Temeda, an Industrial IoT company out of Naperville, Illinois. He currently holds a PCNSA certification and is working towards his PCNSE. He also runs an IT-based Meetup group called “The IT Crowd”.
More to Explore
Check out these Fuel blog posts for further reading: