Sean Bouani, Partner, Zinnov; Naureen Fatima, Project Lead, Zinnov; Simran Arora, Consultant, Zinnov
The US economy has contracted for three consecutive quarters, the Federal Reserve continues to raise interest rates, and inflation keeps climbing, with economists and analysts forecasting a rather alarming recession in the near future. This means that companies are going to experience major OpEx budget constraints, forcing them to deploy various cost-cutting strategies.
The last decade witnessed an exponential rise in demand for Cloud Computing, with businesses flocking to host their applications and data in Cloud environments. Now, however, we are witnessing pushback on Cloud migration due to the impending recession and companies’ concerns over rising prices in the Cloud platforms’ pay-as-you-go models. Industry analysts estimate that companies spend approximately USD 60 Bn annually on public Cloud resources that they barely utilize or even need, given the scale of their operations.
People (and companies) learn to make do, or adapt back to older, simpler ways of doing things, once they lose the means to keep up with their current way of life.
While enterprises have let Cloud costs spiral out of control for years, they are now rushing to correct course. All of this has led to the birth of a new phenomenon within the realm of Cloud Computing – Cloud Repatriation.
The Buzz in the World of Cloud Deployments
Also known as ‘reverse Cloud migration’, Cloud Repatriation is the process of transferring an organization’s applications, data, and workloads from a public Cloud back to its own infrastructure. Some organizations adopt a hybrid or private Cloud model, while others revert to conventional on-site data centers.
Dropbox made a compelling case for repatriation in 2015, when it began transferring its workloads in phases to a self-managed data center, resulting in approximately USD 75 Mn in OpEx cost savings. Dropbox called this initiative the “Infrastructure Optimization” project, which helped the company curb additional costs of revenue and, in turn, increase gross margins.
Today, many companies are following suit, largely because most of them adopted the public Cloud without thoroughly evaluating their storage requirements for the foreseeable future. This lack of foresight has resulted in costs that may begin to endanger their profits going forward.
The Public Cloud and Cost Overruns
Any enterprise that opts for public Cloud storage ends up paying a monthly fee for four major offerings: server instances, storage volumes, per-use services, and certain unique Cloud components. In addition, public Cloud providers are at liberty to change the pricing structures of their offerings at any point, making it very difficult for any company to prepare a long-term budget with realistic projections.
According to industry experts, more than a third of enterprises exceed their budgets by 30%-40% to accommodate the Cloud provider’s monthly subscription fees. Furthermore, these enterprises tend to keep paying for services that aren’t being optimally utilized but are retained anyway. This pattern has led to frequent cost overruns, which can prove disastrous, especially in a recessionary environment. This is where Cloud repatriation starts making economic sense.
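The arithmetic behind these overruns is simple to sketch. The figures below are hypothetical, chosen only to mirror the 30%-40% overrun pattern described above; real pricing varies by provider, region, and contract.

```python
# Illustrative sketch: how the four public Cloud cost buckets can push a
# monthly bill past its budget. All dollar figures are hypothetical.

def monthly_cloud_bill(server_instances, storage_volumes,
                       per_use_services, unique_components):
    """Sum the four major public Cloud cost buckets (USD)."""
    return (server_instances + storage_volumes
            + per_use_services + unique_components)

budget = 100_000  # planned monthly Cloud budget
bill = monthly_cloud_bill(
    server_instances=68_000,   # always-on VMs, often over-provisioned
    storage_volumes=32_000,    # volumes retained after workloads moved on
    per_use_services=24_000,   # egress, API calls, managed services
    unique_components=11_000,  # provider-specific add-ons
)

overrun_pct = (bill - budget) / budget * 100
print(f"Bill: ${bill:,} -> {overrun_pct:.0f}% over budget")  # 35% over
```

Because each bucket is metered separately and repriced at the provider’s discretion, a budget built on last year’s rates can drift well past plan before anyone notices.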
Analyzing the Causes
In addition to guarding against soaring OpEx costs, Cloud repatriation eases concern over security breaches, global outages, and governance issues. Here are some reasons why enterprises want to consider repatriation as an option:
Data Security: Consider a Company A that shares the public Cloud with thousands of other companies. This shared tenancy can sometimes lead to major errors, exploitation, and data breaches. Beyond the SLA that A’s Cloud platform delivers at the beginning of the engagement, there is no concrete way of knowing whether ample control measures are in place to protect the workloads A is hosting. On-prem infrastructure, by contrast, is self-managed and gives A a stronger control mechanism that works like well-oiled machinery in preventing data security failures.
Performance Management: Complete dependence on a public Cloud platform shared by too many users at once makes a company vulnerable to congestion, latency, or sometimes even a complete outage. This can cause millions of dollars’ worth of losses and bring an entire operation to a standstill. A business’s local data center, by contrast, can be equipped with a robust backup and business continuity plan (BCP) to ensure its applications and workloads are always available.
United States: Leading the Phenomenon
Public Cloud consumption increased drastically with the onset of the pandemic in 2020. The ease of access to data allowed almost everyone to resume business during the lockdowns without any major bottlenecks. However, the economic meltdown that came as a by-product of the COVID-19 pandemic has also left several organizations worldwide short of business prospects, forcing them to re-evaluate their budgets.
According to a statement by Chicago Federal Reserve President Charles Evans, the Fed’s immediate move to hike interest rates was meant to control inflation and bring in price stability. This year, the US is witnessing its highest cost-of-living increases in nearly 40 years. Like every other economic slump, this one has seen its share of mass layoffs, which are usually the first cost-cutting step enterprises take.
In 2022, corporates trying to avoid layoffs looked deeper for the sources of their increased operating costs and found one in their data management solutions. Was it worth pouring so many dollars into services that were seldom utilized? The Dropbox example prompted many major companies across the globe to reconsider their move to the public Cloud.
Analysts estimate that almost 70%-80% of US firms have already begun reverse-migrating their data from the public Cloud this year alone. This figure spans industries and is likely to rise in the fourth quarter. As quoted in a Forbes article, Dell’s recent survey points to businesses in the US reaping significant benefits from their repatriation efforts: over 80% of the firms experienced greater performance efficiency, better cost management, and, most importantly, the ability to exercise control over their data.
Clearly, the United States is leading the pack in Cloud Repatriation. But as recessions are known to have a domino effect, repatriations are being considered across the globe to optimize data management.
As we witness the global economic slowdown, it will be interesting to see how the “unClouding” or “de-Clouding” associated with Cloud repatriation unfolds. Going forward, given the focus on economic efficiency, Hybrid Cloud will garner more attention. Not every workload will or should live in the Cloud. Hybrid is sometimes the silver bullet organizations are looking for, and the hyperscalers are realizing it: they are positioning to bring their stacks on-prem and to the edge.
For hyperscalers, it’s imperative to establish trust and credibility with their customers, especially large enterprises. Once the Cloud bubble bursts, customers will think twice before trusting Cloud providers/hyperscalers.
So how can hyperscalers mitigate their risk and work closely with clients to optimize Cloud spend without compromising those client relationships?
Here’s what can be done.
- Mapping business requirements to Cloud strategy
Before locking in a Cloud migration deal, it’s critical for hyperscalers to assist their clients in mapping business requirements to Cloud strategy. Public Cloud and on-prem services each offer advantages depending on where they are used. Public Cloud is well suited to businesses with highly elastic compute requirements; for example, for an e-Commerce site that experiences sudden surges of traffic during certain times of the year, public Cloud is a natural fit. However, if the business itself is infrastructure-heavy, it often makes more sense to stay on-prem. It is imperative for companies to conduct an infrastructure assessment before deciding on a Cloud placement strategy.
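The elasticity argument can be made concrete with a toy cost model. Everything here is an illustrative assumption, not real provider pricing: the public Cloud charges only for units actually used each month, while on-prem capacity must be sized for the annual peak and paid for year-round (at an assumed lower owned unit-hour rate).

```python
# Toy model: annual compute cost, pay-as-you-go Cloud vs. peak-sized on-prem.
# All demand figures and unit-hour rates are hypothetical assumptions.

HOURS_PER_MONTH = 730

def public_cloud_cost(monthly_demand_units, unit_hour_rate):
    """Pay only for capacity actually used each month."""
    return sum(d * HOURS_PER_MONTH * unit_hour_rate
               for d in monthly_demand_units)

def on_prem_cost(monthly_demand_units, unit_hour_rate_owned):
    """Pay for peak capacity all year, whether used or not."""
    peak = max(monthly_demand_units)
    return (peak * HOURS_PER_MONTH * unit_hour_rate_owned
            * len(monthly_demand_units))

# Bursty e-Commerce profile: 10 units most of the year, 60 in the holiday spike.
bursty = [10] * 10 + [60] * 2
# Infrastructure-heavy profile: a steady 50 units every month.
steady = [50] * 12

print(public_cloud_cost(bursty, unit_hour_rate=0.20))       # ~32,120
print(on_prem_cost(bursty, unit_hour_rate_owned=0.08))      # ~42,048
print(public_cloud_cost(steady, unit_hour_rate=0.20))       # ~87,600
print(on_prem_cost(steady, unit_hour_rate_owned=0.08))      # ~35,040
```

Under these assumed rates, the bursty workload is cheaper on public Cloud, while the steady, infrastructure-heavy workload is cheaper on-prem; the crossover depends entirely on how far peak demand exceeds average demand.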
- Leverage SI partners’ expertise in laying down Cloud Economics for clients
Public Cloud services have a history of becoming both cheaper and more functional over time, and it’s important to clearly state the math to clients who are considering repatriation, especially those whose decisions are fueled by recessionary headwinds. Hyperscalers can leverage SI partners to spread the message: even if the public Cloud alone isn’t the ideal solution for a given workload today, it might become so in the future, and it wouldn’t be wise to go through the trouble of repatriation only to discover a year or two down the road that sticking with the original public Cloud architecture would have been better.
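One way to state that math is a break-even sketch: repatriation pays a one-off migration cost to swap a Cloud bill for a (presumably lower) on-prem bill, but if the provider keeps cutting prices, the payback period stretches or never arrives. All inputs below are hypothetical assumptions for illustration.

```python
# Break-even sketch for repatriation: one-off migration cost vs. monthly
# savings, with an optional annual Cloud price decline. Figures are hypothetical.

def months_to_break_even(migration_cost, cloud_monthly, on_prem_monthly,
                         cloud_annual_price_decline=0.0, horizon_months=120):
    """Return the first month when cumulative savings cover the one-off
    migration cost, or None if break-even never occurs within the horizon."""
    cumulative_savings = 0.0
    monthly = cloud_monthly
    for month in range(1, horizon_months + 1):
        cumulative_savings += monthly - on_prem_monthly
        if cumulative_savings >= migration_cost:
            return month
        if month % 12 == 0:  # assume the provider cuts prices once a year
            monthly *= 1 - cloud_annual_price_decline
    return None

# With flat Cloud pricing, a USD 500K migration pays off in about two years...
print(months_to_break_even(500_000, cloud_monthly=60_000,
                           on_prem_monthly=40_000))  # -> 25
# ...but a steady 15% annual price decline erases the payback entirely.
print(months_to_break_even(500_000, cloud_monthly=60_000,
                           on_prem_monthly=40_000,
                           cloud_annual_price_decline=0.15))  # -> None
```

This is the core of the argument SI partners can carry to clients: a repatriation case that looks attractive at today’s Cloud prices may quietly stop making sense as those prices fall.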
- Create risk awareness
Hyperscalers should clearly state the risks associated with Cloud repatriation to their customers. Because repatriation tends to make architectures more complex by integrating public Cloud and on-premises resources, it’s important for customers to make sure the extra management burden is worth it before they jump on the repatriation bandwagon.
With timely measures and helpful guidance, hyperscalers can retain their clients and revenue even in the face of a recession, and enterprises can enjoy the benefits of a long-term, sustainable Cloud strategy.
To build a sustainable Cloud strategy, get in touch with our experts at email@example.com
Speak with our consultants