We’ve seen this movie before

I believe that the evolution to hybrid cloud is inevitable. Not because it’s grabbing headlines, but because it mirrors the industry’s history of new technology adoption. Take the evolution of virtualization, for example.

Going back 20 years, give or take, virtual machines, popularized by VMware, KVM, and Hyper-V, started to gain traction. The value they provided was that applications were no longer tied directly to bare-metal servers, freeing them from specific hardware and enabling data mobility. When virtualization first came out, IT teams weren't 100% comfortable putting their critical workloads into a virtualized environment, so they started with development and other non-production workloads as a test. It wasn't a process challenge—vMotion enabled moves with the application running and no interruption in service, a so-called "live migration"—but a matter of trust.

Over time, as they built confidence in the technology, people started moving more and more of their workloads into a virtualized environment. Many companies adopted a “virtualized-first” mentality, where any new application had to be built on a virtual machine rather than on a bare-metal server. But this went too far. An extreme case: one CTO’s bonus was tied to the percentage of the environment that he could get virtualized in a year. As a result, his IT staff was virtualizing a whole host of things that never should have been virtualized. For example, multiple SQL servers in a cluster all became virtual machines on the same physical box, running counter to the entire concept of having a cluster.

The cloud path is covering familiar ground

The cloud story also started with some hesitation. People thought it was cool, but were cautious. They understood the value—that you don’t have to pay for and manage data center cooling, electricity, etc.—but it meant putting their stuff on someone else’s computer. Just like with virtualization, IT teams dipped their toes in the cloud water by migrating just their non-production and other less-critical workloads.

As companies started to get more comfortable, they began moving more and more to the cloud. Many even adopted a cloud-first strategy—until they realized that not all applications are well suited to that infrastructure. Repatriation—the term for rolling workloads back on-premises after migrating them to the cloud—started happening. In fact, in a recent survey, we found that 72% of organizations said they had to move applications back on-premises after migrating them to the public cloud—and 41% of those repatriations happened because the applications should never have been moved in the first place. Instead of cloud-first, companies are now following a cloud-smart strategy. That's because right now, cloud migration isn't vMotion-easy—you can't just pick up a workload, toss it in the cloud, and expect good results. You'll likely get hit with unanticipated costs because you're essentially flying blind.

The whole idea behind Virtana Migrate is to enable you to understand which of your applications are easy to move and which will pose challenges—no matter how many thousands of applications you have. It turns out that some workloads, by sheer luck, just happen to be pre-optimized for the cloud and are ready to go, and we’ll tell you which those are. We’ll also tell you which applications are difficult to migrate, so you might want to think hard about moving them to the cloud. The reality is that not every workload is cloud-perfect, which means that some applications, or parts of an application, will remain on-premises while others reside in the cloud.
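To make the idea concrete, here is a minimal sketch of the kind of readiness triage an assessment might perform. The Workload fields, the thresholds, and the migration_difficulty function are illustrative assumptions for this post, not how Virtana Migrate actually scores applications.

```python
# Illustrative only: a toy "cloud readiness" bucketing for a workload.
# The signals and thresholds below are assumptions for the sake of example.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    avg_cpu_util: float        # 0.0-1.0, steady-state CPU utilization
    peak_to_avg_ratio: float   # bursty workloads often benefit from cloud elasticity
    east_west_gbps: float      # heavy internal traffic makes migration harder
    licensed_per_core: bool    # per-core licensing can inflate cloud costs
    latency_sla_ms: int        # tightest latency SLA in milliseconds

def migration_difficulty(w: Workload) -> str:
    """Rough bucketing of how hard a workload is to move as-is."""
    score = 0
    score += 2 if w.east_west_gbps > 1.0 else 0     # chatty with on-prem neighbors
    score += 2 if w.licensed_per_core else 0         # licensing penalty in the cloud
    score += 1 if w.latency_sla_ms < 5 else 0        # tight SLAs limit placement
    score -= 1 if w.peak_to_avg_ratio > 3 else 0     # bursty: elasticity helps
    score -= 1 if w.avg_cpu_util < 0.2 else 0        # overprovisioned on-prem today
    if score <= 0:
        return "ready"       # likely lift-and-shift candidate
    if score <= 2:
        return "review"      # feasible, but check cost and dependencies
    return "difficult"       # candidate to keep on-prem or re-architect first

if __name__ == "__main__":
    db = Workload("sql-cluster-node", 0.7, 1.2, 2.5, True, 2)
    web = Workload("marketing-site", 0.1, 4.0, 0.1, False, 100)
    for w in (db, web):
        print(w.name, "->", migration_difficulty(w))  # difficult / ready
```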

Hybrid cloud creates a plot twist

Most IT shops will shift to a hybrid cloud world over the next couple of years, requiring them to manage workloads both on-premises and in the public cloud—and possibly multiple public clouds. And here's where the story starts to look a little different from the virtualization one. Unlike virtual machines, cloud technologies can be foreign to compute teams. For example, cloud concepts like burstable capacity and reserved instances have no direct equivalent in VMware. IT teams are at an inflection point from a management perspective. But what's the best way to build and operationalize cloud computing expertise? One option is to build a new technology silo of expertise, but it's unlikely companies have an appetite for that. They remember the pain of the days when there were database people, storage people, network people, and compute people, and whenever there was a problem, they'd all point fingers at each other. That finger-pointing is why war rooms for finding root cause became popular.

Instead, organizations need to start treating everything like a cloud, managing the compute layer as two pools of resources: one you own and one you pay to use, with both as viable options. Further, both types of cloud—public and private—should be treated as commodities. They're just horsepower for you to use. Which horse do you use for which workload? That's the trick. In other words, does a particular application, or piece of an application, need to stay on-premises, or should it move to the cloud?

It goes further than simply private versus public cloud. Public cloud providers are all horses of a different color, each with particular strengths. For example, Oracle Cloud is better at databases, which is not surprising given the company's heritage. Azure is particularly good at integrating with corporate authentication standards; again, this is logical given Microsoft's LDAP and Active Directory legacy. Out of the gate, AWS is a good all-around option. GCP is quickly becoming the go-to for Kubernetes and containers. Of course, this isn't all black and white. It's not to say that, for example, AWS is bad at LDAP; both are good, but Azure has a slight edge in that area.

The point is that different clouds can be used for different reasons—and, of course, there’s cost to consider—so IT teams need to understand those differences and make informed decisions.

Workload fluidity is the future

Back in the early days of virtualization, moving operating systems between hosts required change control and approvals. But there's now a feature in VMware called Distributed Resource Scheduler (DRS) that moves virtual machines automatically, and no one bats an eye anymore. I believe that's where cloud compute is headed. We're going to be moving workloads from private to public, from public to a different public, and from public back to private—and it will be frictionless. When that motion becomes commonplace, it will all boil down to simply which cloud is the best place for any given workload. To understand that, you need to understand your workloads at a very granular level. (In fact, Virtana's been doing that for more than a decade.)

In a world where workload placement is everything, workload fluidity is critical so you can easily, and even automatically, move workloads around continually to wherever they're best suited. Imagine being able to move workloads to wherever it's cheapest while still meeting performance requirements—even on a daily basis. That capability is what vMotion and DRS brought to virtualization, and it's what Virtana is building for the hybrid cloud.
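As a thought experiment, a placement loop like the one below captures the idea: pick the cheapest location that still meets the workload's performance requirement, and re-evaluate as prices and demands change. The location names, prices, and latency figures are hypothetical; a real system would pull live pricing and telemetry.

```python
# Illustrative sketch of cost-aware placement: choose the cheapest location
# that satisfies a workload's performance requirement. Prices and latencies
# here are made-up numbers for the example.

from typing import Optional

# Hypothetical hourly price and achievable p99 latency (ms) per location.
LOCATIONS = {
    "on-prem": {"usd_per_hour": 0.09, "p99_latency_ms": 2},
    "cloud-a": {"usd_per_hour": 0.12, "p99_latency_ms": 8},
    "cloud-b": {"usd_per_hour": 0.07, "p99_latency_ms": 15},
}

def best_placement(max_latency_ms: int) -> Optional[str]:
    """Return the cheapest location meeting the latency requirement, if any."""
    candidates = [
        (spec["usd_per_hour"], name)
        for name, spec in LOCATIONS.items()
        if spec["p99_latency_ms"] <= max_latency_ms
    ]
    return min(candidates)[1] if candidates else None

if __name__ == "__main__":
    # Re-running this on a schedule (daily, even hourly) is the "fluidity" idea:
    # as prices or requirements shift, the best answer can change.
    print(best_placement(max_latency_ms=5))    # -> on-prem
    print(best_placement(max_latency_ms=20))   # -> cloud-b
```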

If you’re ready to #KnowBeforeYouGo, contact us to request a Virtana Platform demo.
