Many businesses are moving towards the cloud for enterprise applications. Where to store important data, however, may be problematic for some. Paul Comfort, Chi Corporation’s lead engineer, answers commonly asked questions to help you determine whether or not the cloud is right for your business.

Q: There seem to be two schools of thought when it comes to security in the cloud. One school doesn't trust the cloud because their data is now offsite, on systems they cannot see or control. The other school believes that going to the cloud alleviates that concern and threat, because the cloud provides a secure platform for their data.

What do you hear when talking with organizations and IT professionals considering or leveraging the cloud?

ANSWER:
Both are right and wrong for different reasons. I hear the same dichotomy on a regular basis from other IT administrators. The first thing I always want to clarify when starting a cloud conversation is “What is the nature of the data or service you want to put into the cloud?” For the following answer and example, I’m going to define that data as cloud backups.

You are right not to trust the cloud, because the data is under someone else's control. You are also wrong to trust that the cloud will provide a secure platform for that data. The way you mitigate the cloud trust issue is to encrypt your data with your own key before sending it to the cloud. A lot of people blindly trust that the cloud will protect their data, assuming, for example, that AWS encryption keys prevent anyone from seeing it. Those people upload their unencrypted data, sometimes into an open S3 share, and the next thing you know someone finds it, downloads it for themselves, and starts mass-distributing it. Or they simply link to your share, which dutifully provides a high-bandwidth connection to that data, and you get the bill for it! The cloud is only as secure as the user makes it; it is not inherently secure. The user should not trust the cloud to keep their data from prying eyes.
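
A minimal sketch of that approach, assuming Python with the cryptography and boto3 libraries (the bucket and file names here are hypothetical): encrypt locally with a key you keep, so only ciphertext ever leaves your network.

```python
# Sketch: encrypt a backup with your own key before it leaves the building.
# Assumes the "cryptography" and "boto3" libraries; bucket and file names are hypothetical.
from cryptography.fernet import Fernet
import boto3

key = Fernet.generate_key()   # keep this key on-premise; without it the upload is useless to anyone
cipher = Fernet(key)

with open("backup.tar", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

# Only the encrypted blob is sent to the object store.
boto3.client("s3").put_object(
    Bucket="example-backup-bucket",   # hypothetical bucket
    Key="backup.tar.enc",
    Body=ciphertext,
)
```

Even if the bucket permissions are later botched, whoever finds the object gets ciphertext rather than your backups.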

On the other hand, you are also wrong not to trust the cloud because the data is under someone else's control, and you are right to believe that going to the cloud will provide a secure platform for your data. Consider that you pre-encrypted your data before sending it to a cloud provider and that you also set the proper permissions on your object store. We will pick on AWS because their documentation is excellent. I quote from their FAQ: "[…] if you store 10,000,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000 years." That's a ridiculous amount of security for your data. Unless you are a huge IT shop, you will never achieve that kind of secure platform for your own data. Note that we slightly changed the definition of "secure" between these two examples.
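
That figure is just the arithmetic behind S3's published 99.999999999% (eleven nines) design durability; a quick back-of-the-envelope check, assuming that published figure:

```python
# Back-of-the-envelope check of the FAQ claim, assuming the published
# 99.999999999% (eleven nines) annual durability figure.
objects = 10_000_000
annual_loss_rate = 1e-11                          # 1 - 0.99999999999
losses_per_year = objects * annual_loss_rate      # 0.0001 objects per year
print(1 / losses_per_year)                        # -> 10000.0 years per lost object
```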

Those answers can change dramatically for different types of data and services. For example, I'm a longtime advocate of separating a company's website/CMS/blog from an employee portal or an ERP system. Just because you can host a PHP-based blog in your datacenter behind your firewall doesn't mean you necessarily should. Customer-facing applications that have no need to also access company files on internal servers make very good candidates for offloading to cloud platforms.

Q: Do you feel the cloud is more secure by nature than hosting your own systems?

ANSWER:
We can come back to the definition of secure here, but that's beating a dead horse; see the question above. The cloud has the ability to be more secure or less secure than hosting your own systems. It all comes down to the skill and thoroughness of the administrators configuring it. That said, no single company is going to run an Exchange environment better than Microsoft can run Office 365. WordPress.com is going to run your WordPress blog better and more securely than you can, assuming they meet your needs. The examples go on.

If you take a typical System Administrator who has a few dozen VMs running on an on-premise VMware cluster and tell him that he has to move his servers to the cloud within the next three months, he's going to end up with an environment that is much less secure, possibly less functional, and certainly more expensive.

By nature, the cloud is meant to be accessible. When you fire up a server for the first time, you immediately connect to it over the Internet. When you build a server on-premise, that server sits behind your firewall and may even stay off the network until you are ready to add it. Most providers now make it difficult to expose your new server to the entire Internet by mistake; however, not all of them offer such safeguards or enable them by default.
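
One way to catch that kind of accidental exposure on AWS, as a rough sketch assuming Python with boto3, is simply to audit for security groups that allow traffic from anywhere:

```python
# Sketch: read-only audit for security groups that allow traffic from the whole
# Internet (0.0.0.0/0), the "accessible by default" trap described above.
import boto3

ec2 = boto3.client("ec2")
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg["IpPermissions"]:
        for ip_range in perm.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(sg["GroupId"], sg["GroupName"],
                      perm.get("FromPort"), "is open to the world")
```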

Q: What do you believe are the top three security threats to cloud computing today?

ANSWER:
Speaking strictly of security threats to clouds as opposed to an on-premise environment:

Misconfiguration. This one has bitten organizations time and time again. S3 buckets are found open to the world, containing password databases or other sensitive information. Even with significant exposure over the past couple of years, it is still happening. https://www.theregister.co.uk/2018/07/18/kromtech_open_buckets/
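
On AWS, for example, one guard rail is the S3 Block Public Access setting; a minimal sketch with boto3 (the bucket name is hypothetical):

```python
# Sketch: turn on S3 "Block Public Access" so an accidental public ACL or bucket
# policy cannot expose the data. The bucket name is hypothetical.
import boto3

boto3.client("s3").put_public_access_block(
    Bucket="example-data-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```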

Administration errors. This one even bit Amazon; see https://aws.amazon.com/message/41926/. With great power comes great responsibility. Customers can globally change permissions across their entire environment with a single command. What has the desired effect in one area may have the opposite effect in another, opening a hole.

Undisclosed bugs. Complexity is the enemy of security, and these systems are incredibly complex. In the same way that we are still finding holes in operating systems that have been around for over a decade, many cloud providers run equally complex systems that are pounded on daily for vulnerabilities. There is money to be made selling undisclosed exploits, and nation-states have huge resources they can apply to finding them for their own use. These can be kept from general knowledge for many years. It's one thing to have a bug in a VMware console that allows a remote exploit; that console sits behind your firewall and is possibly even segmented on your on-premise network. In the cloud, someone with a bug that allows full access to your Azure console has free rein to do whatever they please to you.

Q: Do you feel those threats differ from the top threats organizations face when hosting their own
environment?

ANSWER:
Organizations face most of the same threats in their own on-premise environments; however, they also have additional tools at their disposal to protect against them. Physical separation is easier to see and verify than virtual separation. VMware and Windows configurations and hardening techniques have been around much longer than cloud interfaces. Cloud interfaces often change rapidly, making configuration and remediation more difficult and the learning curve steeper. In an on-premise environment, you are in control of your upgrade cycle for your console (such as vSphere).

The mistake I see most frequently in on-premise installations is the desire for single sign-on above all else. If your System Administrator uses the same account for email that they use for Domain Administrator duties, VMware administration, backup servers, and single sign-on to their SAN and firewall, you have a serious potential hole. If that account is compromised, you could easily lose all your data and your backups. Those things are easy to segment in an on-premise environment; it is how devices come by default. In the cloud, on the other hand, your first account is given root or global administrator privileges, and many customers I have seen continue to use that account for administration across their entire cloud environment rather than delving into creating additional accounts with specific roles.

Going back to the definition of security, there are other considerations that organizations don't have to worry about once they are in the cloud, physical failures being a prime example. Some organizations spend an incredible amount of time building DR environments that may never be used, or that end up failing during an outage despite all the effort.


Q: Threats have become much more sophisticated and widespread over the past few years, and much of the conversation seems to center on data breaches. How do issues such as weak access management, insecure APIs, and system and application vulnerabilities affect security in the cloud compared to hosting your own environment?

ANSWER:
Generally speaking, though it is not a hard rule, you have a lot more visibility into your environment when it is hosted on-premise than when it is hosted in the cloud. I have noticed that in many organizations security is simply left to the cloud provider. Why purchase an "expensive" Palo Alto instance in the cloud when you can just set ACLs on the network addresses of your servers for free? The problem is that if you are relying on the ACL to protect you and it fails, you have no record of what went wrong.

We are at a stage with cloud solutions where the breaches are so common, obvious, and painful that they eclipse some of the subtler things going on, or yet to come. Think of it as the same point in evolution when Code Red and Nimda were the big threats in on-premise hosting. Trolling for open S3 buckets is child's play compared to what is coming.

A hole in an API or a poor password on an internal network is much less severe than a hole in an API on a public network in public IP space. On your internal network, a threat actor must get past your firewall in order to exploit your systems. In the cloud, if you are relying only on ACLs, once they are in, detection becomes much harder.

You may say, "Well, just purchase that Palo Alto in the cloud and treat your cloud server environment the same as you would if it were on-premise." I agree with that statement. However, it is so easy (and cheaper) to spin up an environment without those things in the cloud that many people do exactly that. If people put the same effort into securing their cloud solutions as they put into their on-premise solutions, we'd see a lot of the appeal of the cloud go away. The myth is that the cloud is cheaper, but that is only true in rare instances.


Q: It would appear that moving to the cloud introduces a new layer of risk that organizations need to plan for: they must protect their own internal network while also protecting data and access to their cloud systems. What are your thoughts on this?

ANSWER:
Remember that system administrator, familiar with on-premise solutions, who you gave three months to get to the cloud? Even if you give him a year, his job is now twice as hard, and his learning curve is much steeper and different from what it was before. This goes back to the common myth that it is cheaper for an organization to be "in the cloud." It just isn't true for everyone, or even for most organizations. Every cloud migration for a decent-sized organization should come with new hires in IT. It should also come with extensive testing and a thorough understanding of the cloud pricing they will experience.


Q: Most organizations run some sort of hybrid cloud solution.  What are some of the specific security
challenges organizations face when running a hybrid cloud?

ANSWER:
Hybrid cloud is what I recommend for anyone considering moving something to the cloud. There are certain circumstances that lend themselves well to cloud workloads, but many that do not. From a security perspective, hybrid cloud has the potential to open a very nasty hole in your defenses. There are several things that are often done because of hybrid cloud:

Active Directory synchronization. Recently, we have seen several customers with issues from brute-force attacks against Office 365 that have locked out user accounts on their internal networks. Those attacks come from places in the world that none of their users should ever be logging in from. The solution requires purchasing extra licensing in Azure for your organization so that you can create rules defining where users can log in from. You also need that same licensing to enable detailed auditing of your users, so you know what was compromised when someone does lose a password.

VPN tunnels. Often you need to transmit data in the clear, or with vulnerable protocols, between your cloud and on-premise environments. The solution is an encrypted VPN tunnel that connects one to the other. However, it is tempting to create the tunnel and then ignore the security of the traffic that passes over it. This means that any cloud compromise may have a free pass to exploit your internal network, effectively bypassing your firewall. You have to treat your cloud environment the same way you would treat a DMZ in an on-premise solution: determine what traffic is legitimate, then create rules to allow only that specific traffic, blocking (and reporting) everything else. This protection needs to go both ways. If your on-premise environment is compromised through a phishing email, you don't want your cloud servers easily compromised as well.
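
As a sketch of that DMZ-style thinking on AWS (assuming boto3; the group ID, port, and on-premise address range are hypothetical), you might allow only the one protocol you actually need across the tunnel and let the security group's default deny handle the rest:

```python
# Sketch: allow only SQL Server traffic (TCP 1433) from the on-premise range into
# the cloud subnet's security group; everything else is implicitly denied.
# The group ID, port, and CIDR are hypothetical.
import boto3

boto3.client("ec2").authorize_security_group_ingress(
    GroupId="sg-0abc1234def567890",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 1433,
        "ToPort": 1433,
        "IpRanges": [{"CidrIp": "10.10.0.0/16", "Description": "on-premise network"}],
    }],
)
```

The reporting side would still need something like flow logs or a firewall appliance watching that traffic.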

More relaxed rules on OS-level firewalls. This one is a little dubious, because at least half the organizations I visit completely disable their OS-based firewall at the first moment of trouble. Once disabled to fix a problem, it rarely gets re-enabled. With good segmentation on a server VLAN, that might even be an acceptable risk your organization is willing to take; I'm guilty of having done it myself from time to time. But when you have servers in the cloud and servers on-premise that need to talk to one another, suddenly you are back to modifying OS-level firewall rules to allow additional subnets or servers from an environment as fluid as the cloud can be. The temptation is even greater to just disable the firewall or neuter it with sweeping allow rules.

Q: In your experience, what are some of the common mistakes or oversights that companies make today as it
relates to planning for security when moving to the cloud?

ANSWER:
The single most common mistake or oversight is not planning out accounts with separation of duties. Giving every user or IAM role full privileges to everything, or excessive privileges, ensures that you never have to worry about a missing permission causing a problem, but it also leaves you far more exposed. Consider this example of someone who accidentally posted his root keys to GitHub: https://wptavern.com/ryan-hellyers-aws-nightmare-leaked-access-keys-result-in-a-6000-bill-overnight. Had he used an IAM key with access to only the particular resource it needed, this would not have happened. Amazon, in particular, urges you to remove your root keys once you start separating accounts. Find the best practices for your provider and understand them thoroughly before moving the first thing to the cloud: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html. This is one of the hardest concepts to grasp when moving to the cloud, and the temptation to take the easy road and grant too many permissions is strong.
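
A minimal sketch of what that separation looks like in practice (Python with boto3; the policy and bucket names are hypothetical): a key scoped this narrowly can only read and write one bucket, so it cannot spin up instances or touch anything else if it leaks.

```python
# Sketch: an IAM policy scoped to a single bucket, the kind of narrow key that would
# have contained the damage in the GitHub leak above. Names are hypothetical.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-backup-bucket/*",
    }],
}

boto3.client("iam").create_policy(
    PolicyName="backup-bucket-only",
    PolicyDocument=json.dumps(policy_document),
)
```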

Cloud commitment: There is the forklift approach, where you use a tool to move your VMs as they are into your cloud environment, and there is the all-in approach, which uses cloud features to facilitate and streamline your future operations. The forklift approach is going to be the more expensive one, and it may not even work very well depending on your use case. If users are accessing large files on a file server that now lives in the cloud, you have just increased their wait time for opening and saving, and you incur a bandwidth cost every time they do. The security implications of this method are also poor: your applications and workflow were built for an on-premise solution, and the way you handle communication, in particular, may not take into account that it may now be traveling over the public Internet.

The all-in approach takes advantage of containers and cloud-specific services, things such as "as a service" database engines you can use without spinning up and administering your own Windows/Linux server, or retooling your applications to spin up additional resources when they are in demand and spin them down when they are not. Those things will take your programmers' time to re-engineer your applications; however, they generally provide a more flexible, more manageable, and more secure solution once done correctly, because they demand attention to the details a forklift move misses. One caution: as soon as you commit to an Amazon-specific service that has no exact equivalent in an on-premise or Azure solution, you have increased the resistance your organization will need to overcome to move away from that provider. My recommendation for those testing the water with the cloud is to take the forklift approach but inspect everything and every process as you would with an all-in approach, and don't move to proprietary "as a service" offerings until you are sure your provider is the one you want to stick with.

Pricing: Generally, the cloud is going to be more expensive than doing it yourself unless you can move most of your workload into managed "as a service" cloud offerings. Do not underestimate the cost of bandwidth, or the cost of restoring from Glacier: https://medium.com/@karppinen/how-i-ended-up-paying-150-for-a-single-60gb-download-from-amazon-glacier-6cb77b288c3e and http://davidsimic.com/2016/07/18/amazon-s3-pitfalls-how-to-innocously-rackup-a-1797-bill-restoring-1tb-of-data-in-amazon-s3/
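
The arithmetic is mundane but worth doing before, not after, the restore; the per-GB rates below are purely illustrative placeholders, not current prices:

```python
# Purely illustrative: what pulling 1 TB of backups back out might cost.
# Both per-GB rates are hypothetical placeholders; check your provider's price sheet.
restore_size_gb = 1024          # 1 TB restore
egress_per_gb = 0.09            # hypothetical Internet egress rate, $/GB
retrieval_per_gb = 0.01         # hypothetical cold-storage retrieval fee, $/GB

print(round((egress_per_gb + retrieval_per_gb) * restore_size_gb, 2))   # -> 102.4 dollars
```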

How is this related to security? Once your finance department sees a bill completely out of line with what was expected, they are going to want it fixed yesterday. If your spend is in the thousands per month, you may not even be notified when that jumps to a thousand per day. Whether it was caused by a security breach or simply a misunderstood pricing model, you are probably going to be making sweeping changes without taking the time to plan or to fully understand the breach behind it. The two examples linked above were not businesses; they could just shut down until they figured it out. When your business must stay up, that can be painful.
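
A partial mitigation, sketched here for AWS with boto3 (the threshold and notification topic are hypothetical, and billing metrics must already be enabled for the account), is a billing alarm so someone in IT hears about the jump before finance does:

```python
# Sketch: CloudWatch billing alarm that notifies when estimated monthly charges pass
# a threshold. Billing metrics live in us-east-1 and must be enabled for the account;
# the threshold and SNS topic ARN are hypothetical.
import boto3

boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(
    AlarmName="estimated-charges-over-5000-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                   # check every six hours
    EvaluationPeriods=1,
    Threshold=5000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # hypothetical topic
)
```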

Shopping around with more than just price in mind: The cheapest provider is not the best provider, and each of the top three (AWS, Azure, Google Cloud) has services that are cheaper than its rivals' and services that are more expensive. You may pay even more (or significantly less) if you go with a smaller provider; however, you can get more personalized help from people with experience moving workloads into the cloud. The do-it-yourself approach to cloud can be fun for an administrator, but an organization will benefit long-term from the security and peace of mind that comes with a personalized approach.
