Cloud repatriation is the process of moving certain workloads and applications from the public cloud back to on-premises data centers. It is a growing trend: according to IDC, 80% of customers report cloud repatriation activities. But what does this mean for organizations, and how should you approach your data architecture?

Cloud Repatriation – What’s Happening and Why?

In the beginning, there was on-premises computing. And, it was good.  But, not good enough. Next came the cloud – the land where anything was possible. Businesses saw promise in the cloud and placed their data and applications into it. With a mix of cloud and on-premises, businesses were free to judge the strengths – and weaknesses – of both. But, this knowledge caused some discomfort. Things weren’t as simple as they used to be.

The landscape of data management has rapidly shifted from siloed data centers to a multi-cloud world. Organizations no longer build a data center simply to store their data; instead, they put data at the center of their decision making. Attributes such as agility, insight, and flexibility are vital to driving business value, and control, ease of use, and cost-effectiveness are key to enabling that. What many organizations found is that, despite expectations, the public cloud was not always the right answer once they needed to unlock the possibilities of their data.

But what to do? Moving data and workloads for the sake of moving them, in the hopes it will make things easier, is not the right approach.

Strengths: On-Prem vs. Public Cloud

Let’s start by diving into on-premises. On-premises is a great option when predictability and performance are imperative for your workload. It also lets you store, access, and transform data in an environment where you have full control over how (and where) your data is stored, without sharing resources or infrastructure with other users. In some use cases, such as big data repositories where you have a lot of data that you need to access regularly, on-premises is also the lower-cost option.

On the public cloud end of things, data stored in the cloud is easy to access, and a cloud solution is easy to implement: a few mouse clicks deploy what used to take months in your on-premises data center. Robust resources and third-party analytics tools and services are abundant, if not unlimited; they can be added and removed on the fly, and you can easily analyze your storage and compute consumption.

Yet architecting data comes with new challenges in this multi-cloud world. The demands of compliance, data portability, cost, and management have made one thing clear: it’s not an either/or decision. Public cloud and on-premises solutions offer different benefits, and businesses today can’t afford not to leverage both. Hybrid cloud lets you take the best of both worlds – own your data and rent the cloud, as needed.

So which workloads go where? And how should you approach the cloud repatriation trend – should you pull some applications from the public cloud back to on-premises?

Cloud Repatriation – Where is This the Right Approach?

Mike McWhorter wrote a great blog on public vs. private cloud and identifying the right home for each workload. Beyond the specific use cases he outlines where repatriation should be considered, the bottom line is that the environment where your data resides should be dictated by your data strategy.

Your data strategy outlines how data is used as an asset for your business priorities and plans, and the technology strategy and capabilities required to drive the transformation toward a data-driven company.

Regardless of where you are in establishing your data strategy, you can’t afford to silo your data in the short term. Moving petabytes of data from one place to another is no straightforward task, and it can be very expensive if you’re extracting data from the cloud. Often, the bulk of your data is most portable when it’s stored on-premises: you can leverage services and tools at multiple cloud providers and access your data without egress charges, significantly reducing public cloud storage costs.
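To make the egress-cost argument above concrete, here is a rough back-of-the-envelope sketch comparing the two options. All per-GB rates below are illustrative placeholders, not actual provider or hardware pricing; plug in your own numbers.

```python
# Hypothetical cost comparison: large dataset in public cloud vs. on-premises.
# All rates are illustrative assumptions, not real provider pricing.

def monthly_cloud_cost(data_tb, egress_tb,
                       storage_per_gb=0.021, egress_per_gb=0.09):
    """Cloud cost = storage fee plus per-GB egress fees for data read out."""
    gb_per_tb = 1024
    return (data_tb * gb_per_tb * storage_per_gb
            + egress_tb * gb_per_tb * egress_per_gb)

def monthly_onprem_cost(data_tb, amortized_per_gb=0.010):
    """On-prem cost = hardware, power, and admin amortized to a per-GB rate.
    Reading the data out to multiple cloud services incurs no egress fee."""
    return data_tb * 1024 * amortized_per_gb

# Example: a 1 PB (1024 TB) repository where analytics jobs
# pull out 100 TB per month.
cloud = monthly_cloud_cost(1024, 100)
onprem = monthly_onprem_cost(1024)
print(f"cloud:   ${cloud:,.0f}/month")
print(f"on-prem: ${onprem:,.0f}/month")
```

Under these assumed rates, the egress fees alone add thousands of dollars per month, which is why keeping a frequently accessed bulk repository on-premises while renting cloud compute can come out ahead.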

Originally published on the Western Digital blog, July 9, 2019, by Erik Ottem.