Cloud migration is, more often than not, treated as a one-way street: organizations move applications and workloads from on-premises to a public cloud or, less often, from one public cloud to another. But a key finding in our recent State of Hybrid Cloud survey of 350 IT professionals with cloud decision influence or authority is that a whopping 72% of participating organizations said they’ve had to move applications back on-premises after migrating them to the public cloud. Such repatriation doesn’t necessarily mean there’s a problem, but cloud migrations require a lot of time and effort, both of which are in short supply for most IT groups. When more than a handful of organizations are doing this (and nearly three-quarters of respondents is a significant number), it raises some eyebrows. It’s important to distinguish between a strategic repatriation undertaken to support evolving business needs and a rollback to undo problems caused by inadequate migration planning.
It turns out that there’s not just one thing driving disruptive (i.e., not strategic and purposeful) repatriation. Respondents cited various issues, including:
- Migration of applications that should have stayed on-premises (41%)
- Technical issues with provisioning for the public cloud (36%)
- Application performance degradation (29%)
- Wrong public cloud provider selection (21%)
- Unexpected costs (20%)
In fact, one-third of the respondents cited two or more reasons for the rollback, and 12% experienced three or more of these issues.
This doesn’t have to happen. Here’s how you can avoid these costly and disruptive cloud migration mistakes.
How to avoid migrating applications that should stay on-premises
Not every workload should be migrated to a public cloud, which is why many organizations opt for a hybrid cloud approach, keeping part of their estate on-premises. You need to consider the attributes of each workload (e.g., its data characteristics, back-end dependencies, and privacy and security requirements) and its inherent suitability for a public cloud environment. (Several examples are explored in more detail here.) You also need clear goals and priorities so you can make the best decision to support your business needs, and you must communicate those objectives and decisions to everyone involved in the cloud migration process. Finally, you should understand the detailed health, utilization, and performance characteristics of your workloads in the data center. This baseline provides critical information to help you make migrate-or-stay decisions.
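To make the idea concrete, here is a rough sketch of what an attribute-based migrate-or-stay screen could look like. This is purely illustrative, not Virtana’s actual logic; the attribute names and thresholds are hypothetical, and a real assessment would weigh many more dimensions.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_residency_restricted: bool  # hypothetical: regulatory/privacy constraint
    latency_sensitive_backend: bool  # hypothetical: tightly coupled to on-prem systems
    avg_cpu_utilization: float       # from the data-center baseline, 0.0-1.0

def migrate_or_stay(w: Workload) -> str:
    """Flag workloads whose attributes argue for staying on-premises
    (illustrative rules only)."""
    if w.data_residency_restricted:
        return "stay: data privacy/residency requirements"
    if w.latency_sensitive_backend:
        return "stay: tight coupling to on-prem back end"
    if w.avg_cpu_utilization > 0.85:
        return "review: sustained high utilization, re-baseline first"
    return "migrate: no blocking attributes found"
```

The point of even a toy screen like this is that the decision is driven by documented workload attributes and baseline data, not by gut feel.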
How to avoid technical issues with public cloud provisioning
The technical challenges of public cloud provisioning stem in no small part from the sheer number of configuration options available. Compounding that, most of the available guidance focuses primarily on CPU utilization. While important, CPU is not the only factor to consider: you must also take other computing dimensions into account, such as memory usage, IOPS, and network bandwidth. Additionally, you may have conditions that need to be factored in, such as pre-paid reservation commitments or certain types of VMs you want to avoid, to name just two examples. All of these puzzle pieces have to fit together, and if they don’t, you’re going to run into problems. You can head this off at the pass with an automated recommendation engine that looks at all the critical dimensions of your workloads over time, within constraints that you set, to provide you with a manageable short list of configurations. From there, you can perform what-if analysis to find the optimal combination. When it comes time for provisioning, you’ll have a no-surprises process.
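Here’s a minimal sketch of the kind of multi-dimensional filtering such an engine performs. The instance catalog, family names, and prices below are illustrative placeholders, not real provider data, and a production engine would also analyze utilization over time rather than a single requirements snapshot.

```python
# Hypothetical instance catalog; names and prices are illustrative only.
CATALOG = [
    {"family": "m5", "name": "m5.large",  "vcpu": 2, "mem_gib": 8,  "iops": 3000, "net_gbps": 10, "hourly_usd": 0.096},
    {"family": "m5", "name": "m5.xlarge", "vcpu": 4, "mem_gib": 16, "iops": 6000, "net_gbps": 10, "hourly_usd": 0.192},
    {"family": "t3", "name": "t3.xlarge", "vcpu": 4, "mem_gib": 16, "iops": 2000, "net_gbps": 5,  "hourly_usd": 0.166},
]

def shortlist(req, catalog, excluded_families=()):
    """Keep only instance types that satisfy every dimension of the
    workload's observed requirements, honor exclusion constraints,
    then rank the survivors by hourly cost."""
    fits = [
        c for c in catalog
        if c["family"] not in excluded_families
        and c["vcpu"] >= req["vcpu"]
        and c["mem_gib"] >= req["mem_gib"]
        and c["iops"] >= req["iops"]
        and c["net_gbps"] >= req["net_gbps"]
    ]
    return sorted(fits, key=lambda c: c["hourly_usd"])
```

Note that the filter checks every dimension, not just vCPU count; a CPU-only filter would happily pick an instance that starves the workload of IOPS.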
How to avoid application performance degradation in the cloud
Any benefits you gain by moving applications to the cloud are wiped out if performance slows to an unacceptable level. To prevent this, you first need to create a baseline of on-premises performance and model representative workloads in your candidate public cloud configurations before any migration work begins. Baselines are critical because they provide a reference point for comparing workload utilization and performance in the cloud. Make sure your baselines capture any seasonality so you get the most complete view of the health, utilization, and performance characteristics of your workloads in your on-premises infrastructure. Then, after you’ve moved your applications to the target cloud, you must continue optimizing them to keep them rightsized for performance and cost.
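As a simple sketch of the comparison step, the function below flags any cloud metric that has degraded beyond a tolerance relative to the on-premises baseline. It assumes higher values are worse (as with latency), which is a simplification; the metric names and the 10% tolerance are hypothetical.

```python
def flag_degradation(baseline, cloud, tolerance=0.10):
    """Compare per-metric cloud measurements against the on-prem baseline;
    flag anything more than `tolerance` worse. Assumes higher = worse
    (e.g., latency), which is a simplification for illustration."""
    flags = {}
    for metric, base_val in baseline.items():
        cloud_val = cloud.get(metric)
        if cloud_val is None:
            continue  # metric not yet measured in the cloud
        if cloud_val > base_val * (1 + tolerance):
            flags[metric] = (base_val, cloud_val)
    return flags
```

The check only makes sense if the baseline values already reflect seasonal peaks; comparing against an off-peak baseline would raise false alarms during every busy season.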
How to avoid selecting the wrong public cloud provider for your needs
Cloud providers are not all the same, and picking the wrong one can be an expensive and disruptive mistake. You may have specific requirements that help you narrow the list of potential providers, but how do you make an apples-to-apples comparison to ultimately select the provider that will deliver the best performance at the lowest cost? The key is to use an automated recommendation engine, as described above, to build the list of comparable configurations for each provider, and then “play back” those workloads before making any commitments.
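One simplified way to picture the final comparison: once you have a shortlist of qualifying configurations per provider, take each provider’s cheapest qualifying option and compare those head to head. This sketch assumes the per-provider shortlists were already built from played-back workload data; the provider and configuration names are placeholders.

```python
def cheapest_qualifying(per_provider_shortlists):
    """For each provider, pick the lowest-cost configuration from its
    shortlist of qualifying instance types. Providers with no qualifying
    configuration drop out of the comparison entirely."""
    return {
        provider: min(configs, key=lambda c: c["hourly_usd"])
        for provider, configs in per_provider_shortlists.items()
        if configs
    }
```

Because every configuration in every shortlist already meets the same workload requirements, comparing the winners by cost is a genuine apples-to-apples comparison.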
How to avoid unexpected public cloud costs
No one wants a nasty surprise in their end-of-month cloud bill, but it happens all the time. Whether the cause is an unplanned surge or an accumulation of workload shifts over time, you want to understand potentially costly changes before they bust your budget. That requires visibility into potential problems before they add up, along with ongoing cloud optimization capabilities to safely adjust resources so you save on your bill without risking performance.
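As a back-of-the-envelope illustration of the early-warning idea, this sketch projects month-end spend from the daily costs observed so far and flags a projected budget overrun. Real cost-management tooling is far more sophisticated (per-service attribution, anomaly detection, and so on); this just shows why mid-month visibility beats an end-of-month bill.

```python
def projected_overrun(daily_costs, days_in_month, budget):
    """Project month-end spend from the daily costs observed so far
    (simple run-rate extrapolation) and flag a projected overrun."""
    run_rate = sum(daily_costs) / len(daily_costs)
    projected = run_rate * days_in_month
    return projected, projected > budget
```

For example, ten days of $100/day spend projects to $3,000 over a 30-day month, so a $2,500 budget would be flagged with two-thirds of the month still left to react.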
Know before you go with Virtana
In short, the key to avoiding all of these public cloud challenges—and the unnecessary repatriation of applications—is to #KnowBeforeYouGo. Virtana Platform makes this possible by providing intelligent observability into which workloads to migrate, and ensuring that unexpected costs and performance degradation are avoided once workloads are operating in the cloud.