Five years ago, the conversation around cloud storage was quite different from what it is today. In a 2014 SearchStorage guide on “The case for cloud storage,” three of the four featured articles dealt with backup or cold storage, and the guide’s two main topics were cost and the No. 1 user concern: security.

Today, most organizations have moved well beyond using the cloud for backup only. Many are moving a range of applications to the cloud, including critical ones. According to a recent Taneja Group survey, more than half of respondents said they will be running at least 40% of their workloads as cloud-based software-as-a-service-delivered applications within the next two to three years. Of course, when discussing the future of cloud storage, costs are still top of mind, though the concern now centers more on moving data around than on simply storing it. Security is much less of a concern these days.

The conversation around cloud storage today focuses on multi-cloud, hybrid cloud, the emergence of block storage in the cloud and even the repatriation of cloud workloads to on-premises infrastructure. To get a feel for where things stand, we asked six cloud storage experts for their perspectives on the most significant developments of the last five years and on where they see the future of cloud storage heading.

What were the most significant cloud storage developments in the last five years?

Jeff Byrne and Jeff Kato, senior analysts, Taneja Group: The biggest development is that cloud storage has become mainstream over the past five years for a range of use cases, such as global content distribution, backup, disaster recovery, and data analytics. Whereas companies had started adopting cloud storage in 2014 for backup and DR of noncritical applications, five years later, we see organizations routinely using cloud storage to back up and provide DR support for production business apps.

Based on Taneja Group’s research, concerns around security, privacy, and the threat of provider lock-in tended to slow or prevent serious public or hybrid cloud storage deployments in 2014. Today, those concerns are much less of an obstacle, as cloud storage offerings have matured and companies have gained experience with and confidence in secondary and primary storage deployments in the cloud.

In 2014, on-premises data centers were the architecture of choice and the cloud was viewed primarily as a dev/test or even an experimental platform. Today, a majority of the companies we speak with are taking a hybrid cloud mindset to their IT architecture plans, which governs how they perceive and plan to adopt cloud storage.

As enterprise apps move to the cloud, cloud storage tends to follow, though most firms are keeping an open mind about where their apps and data will be deployed over the long run. A hybrid cloud architecture helps to translate this mindset into reality, as companies benefit from flexible workload deployments, including the ability to move workloads between on-premises and public cloud over time.

Deepak Mohan, analyst, IDC: Native file storage services, either developed in-house or delivered through partnerships and acquisitions, have grown to become a part of all major public cloud storage portfolios in the last five years. In addition, multiple new tiers of lower-cost cool and cold tier storage options have been introduced, making secondary storage on public cloud storage increasingly attractive for enterprises. Go-to-market catalysts for this include partnerships with major backup software and service providers.

The last five years have also seen a number of major partnerships between traditional enterprise storage leaders and public cloud providers, enabling a hybrid environment within the tools and processes already familiar to enterprises.

George Crump, founder and president, Storage Switzerland: The ever-declining price of cloud storage, driven by the advent of archive tiers such as Amazon S3 Glacier, the Microsoft Azure Blob storage archive tier and Google’s upcoming archive offering, is one significant development. These prices make cloud storage a potential alternative to tape.
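As a rough sketch of how that tiering is typically put into practice, the following Python snippet uses the AWS SDK (boto3) to apply a lifecycle rule that migrates objects down to Glacier-class archive storage over time. The bucket name, rule ID and prefix are hypothetical, and the exact tiers and timings would depend on the provider and the retention policy.

```python
import boto3

# A minimal sketch: apply a lifecycle rule that moves objects into
# archive tiers on a schedule. Bucket name, rule ID and prefix are
# hypothetical placeholders.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-90-days",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},  # only tier objects under logs/
                "Transitions": [
                    # GLACIER and DEEP_ARCHIVE are the storage classes
                    # behind the tape-replacement economics Crump describes.
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```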

Improvement in production storage performance is another significant development in the last five years. Cloud providers can deploy high-performance storage options faster than typical data centers. Their as-you-need-it business model is also ideal for data centers looking to test the impact of those solutions.

Alastair Cooke, consultant: The widespread adoption of [Amazon] S3 as the de facto standard way for applications to access cloud storage is the most significant development. In particular, the availability of S3-compatible storage systems for on-premises deployment or from other cloud providers enables customer choice. Whether it is bespoke applications using object storage or packaged backup software, the availability of a ubiquitous standard for low-cost scalable storage enables more use cases than the AWS S3 … alone would ever allow.
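To illustrate the portability Cooke describes, here is a minimal Python sketch using boto3, assuming a hypothetical S3-compatible on-premises endpoint and placeholder credentials. The application code is identical whether the bucket is served by AWS or by any compatible object store; only the endpoint changes.

```python
import boto3

# A minimal sketch of S3 as a de facto standard: the same client code
# can target AWS or an S3-compatible on-premises system by pointing at
# a different endpoint. Endpoint URL, credentials and bucket name are
# hypothetical.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal:9000",  # hypothetical on-prem endpoint
    aws_access_key_id="EXAMPLE_KEY",          # placeholder credentials
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Application logic is unchanged regardless of which provider
# actually serves the bucket.
s3.put_object(Bucket="backups", Key="db/2019-09-30.dump", Body=b"...")
obj = s3.get_object(Bucket="backups", Key="db/2019-09-30.dump")
print(obj["Body"].read())
```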

Mike Matchett, principal consultant, Small World Big Data: I would vote for the evolving use of generic cloud object storage to now power underlying ‘chunk’ storage for global distributed file systems, active archive services and ‘cloud-converged’ data protection tiers with on-premises storage. We have both highly performant object storage and highly scalable object storage.

People are starting to really make use of the idea that all kinds of data can have rich metadata to power policies, intelligent automation and self-optimization. As an inevitable second vote, I might look to the integration and offering of high-performance storage options — e.g., NVMe — in cloud infrastructures.
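One concrete form of the metadata-driven policy Matchett points to, sketched below under S3-style assumptions: objects carry a retention tag, and a lifecycle rule keys off that tag rather than off object names, so placement follows the metadata wherever the object lives in the namespace. Bucket, key and tag values are hypothetical.

```python
import boto3

# A minimal sketch of metadata driving storage policy in an S3-style
# store. Bucket, key and tag values are hypothetical.
s3 = boto3.client("s3")

# Attach a retention class to an object as metadata (an object tag).
s3.put_object_tagging(
    Bucket="example-data-bucket",
    Key="datasets/2019/q3.parquet",
    Tagging={"TagSet": [{"Key": "retention", "Value": "cold"}]},
)

# Any object tagged retention=cold is tiered down automatically,
# independent of its name or prefix.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-cold-data",
                "Status": "Enabled",
                "Filter": {"Tag": {"Key": "retention", "Value": "cold"}},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```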

In terms of the future of cloud storage, what factors will determine which data stays in the cloud and which stays on-premises?

Cooke: Data governance has been a large factor in determining what data can be stored in the cloud. Regulatory bodies are catching up with the state of the public cloud and removing nontechnical barriers to public cloud adoption, which will enable more data to be migrated to the public cloud. At the same time, organizations are gaining a better understanding of the limitations and costs of public cloud storage.

Over time, the use of public cloud storage will be driven more by data requirements. Data that requires ubiquitous access will land in the public cloud, as will data that requires high durability, such as backup and archive data. We will also see more customers repatriating their massive data sets as they realize it is possible to build a lower-cost massive data store the way Dropbox has done, although this will only be for truly massive data sets.

Marc Staimer, founder, Dragon Slayer Consulting: As more mission-critical applications move to the cloud, higher-performing storage is required. I expect more scalable block and file storage to be utilized in the cloud and the cost of that storage to decline rapidly. Data will be colocated with the applications that require it. So if the application stays on-premises, I expect the active data to stay with the application.

Other reasons data will stay on-premises include data sovereignty, privacy regulations, latency between the application and its data, and regulation in general.

Matchett: Within five years, most storage location decisions will be driven and executed automatically, if not by policy then by actively optimizing and learning algorithms. There will be less conscious concern about where data is ultimately stored and more focus on delivering data and assuring access at the right time and place for competitive processing needs.

Data protection will always be a major concern, but it will become increasingly built into self-optimizing hybrid cloud storage implementations. We see this already in the increasing trend to converge primary and secondary storage, automatically tier into cloud storage layers and implement continuous data protection across multiple clouds.

Crump: The type of data stored in the cloud will vary by organization. It will not be an industry-type decision, [but] more the ‘personality’ of the particular organization. The math still doesn’t work out for the long-term storage of a lot of data in the cloud. Also, egress fees and API call charges continue to be an issue.
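A back-of-the-envelope sketch of the math Crump alludes to, using illustrative prices that are assumptions for this example rather than quoted rates: per-gigabyte storage looks cheap until egress and API charges for actively used data are added in.

```python
# Illustrative (assumed, not quoted) 2019-era list prices. The point is
# the shape of the math, not the exact figures.
STORAGE_PER_GB_MONTH = 0.023   # assumed standard-tier price, $/GB-month
EGRESS_PER_GB = 0.09           # assumed internet egress price, $/GB
GET_PER_1K_REQUESTS = 0.0004   # assumed API charge, $/1,000 GET requests

def monthly_cost(stored_gb, read_fraction, requests):
    """Estimate one month's bill for stored_gb gigabytes, of which
    read_fraction is downloaded, via `requests` GET calls."""
    storage = stored_gb * STORAGE_PER_GB_MONTH
    egress = stored_gb * read_fraction * EGRESS_PER_GB
    api = requests / 1000 * GET_PER_1K_REQUESTS
    return storage + egress + api

# 100 TB stored, 10% of it pulled back out each month: the egress line
# alone adds roughly 40% on top of the raw storage charge.
print(f"${monthly_cost(100_000, 0.10, 1_000_000):,.2f}")
```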

Byrne and Kato: As companies increasingly view cloud as a strategic infrastructure platform, they are also increasing their cloud storage focus, use cases and budgets. Secondary and tertiary use cases such as data archiving, backup and DR are still receiving the lion’s share of focus, with firms unable to resist the economics and convenience of archiving and protecting their data in the cloud.

Dev/test storage will continue to grow, as more companies look to develop new apps in the cloud. Storage to support data analytics is also gaining in importance, as companies use in-cloud AI, [machine learning] and other approaches to power their analytics efforts.

What challenges do you see in the future of cloud storage, particularly in the next five years?

Crump: I hate to sound like a broken record, but all the transactional fees, like egress fees, are a problem. The most challenging task many organizations face is understanding their monthly bill from the provider.

Another challenge is how to accurately measure cloud storage consumption and make sure you are downsizing your capacity requirements as soon as you are able. The cloud promises an easy scale-up and scale-down model, [but] very few organizations actually scale down their utilization.
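As a minimal sketch of the consumption measurement Crump recommends, the snippet below walks an S3-style bucket and totals bytes per top-level prefix, making it easier to spot capacity that could be scaled down. The bucket name is hypothetical, and at large scale a provider's native storage metrics would be cheaper than listing every object.

```python
import boto3
from collections import defaultdict

# A minimal sketch: total object sizes per top-level prefix in a bucket
# to surface scale-down candidates. Bucket name is hypothetical.
s3 = boto3.client("s3")
usage = defaultdict(int)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-data-bucket"):
    for obj in page.get("Contents", []):
        prefix = obj["Key"].split("/", 1)[0]
        usage[prefix] += obj["Size"]

for prefix, total in sorted(usage.items(), key=lambda kv: -kv[1]):
    print(f"{prefix}: {total / 1024**3:.1f} GiB")
```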

Cooke: A large challenge remains with the shift from a Capex-based on-premises storage deployment to Opex-based public cloud consumption. Budget cycles are still usually tied to an organization’s fiscal year and expect a fixed price for infrastructure. Using public cloud resources frequently results in a variable bill and, for storage, often a bill that increases over time. There are still organizations that cannot adopt public cloud storage because they are tied to a fixed annual budget cycle and cannot accept the risk of a variable monthly bill.

Byrne and Kato: We anticipate that companies will find cross-cloud data movement and migration to be challenging, at least in the short term, as compatibility issues and high egress costs prevent customers from achieving truly seamless, pan-cloud storage deployments. Companies are having a hard time optimizing their storage resources and managing associated costs as they deploy storage across on-premises and one or more public clouds, and this challenge will only get bigger as the industry continues to move toward hybrid and multi-cloud.

Companies will increasingly encounter a need for dynamic, policy-based data placement to achieve performance via local storage and regulatory compliance, and this need is far from being fulfilled today. Customers will also find it challenging to deploy and run their primary and secondary storage across clouds that do not offer a consistent set of data and metadata services.

Originally published on TechTarget, by Stacey Peterson, September 30, 2019
