The terms “workload” and “application” are often used interchangeably, but they are not the same, and it’s important to understand the difference.
An application is a set of functions that serve a business purpose. A workload is the stress that application places on the infrastructure. For example, when a customer calls a company’s support line, their account information is already on the screen by the time they’re connected to a support representative. To make this happen, the customer support application runs in the background to look up the incoming phone number, collect the associated account details, and display that information on the representative’s screen. The whole process touches code that runs on one or more operating systems, which sit on virtual machines (or bare-metal servers), which in turn rely on storage and networking subsystems to communicate. The effort needed to support this application, in CPU, memory, network, and disk consumption, is what I consider a workload.
Each application has a workload fingerprint. When the application was initially built, the developers generally knew how much horsepower was needed to make it work. For instance, they knew the application in our example had to support 500 customer service reps who average one phone call every 2 ½ minutes. From that, they can determine how many queries will be executed on the database, how many rows will be returned, how much disk I/O will occur, how much CPU is consumed crunching the numbers, and so on, and spec out the infrastructure needed to support the application. Pre-production testing can produce a more accurate workload fingerprint; rather than guessing at consumption values, a developer can run tests that simulate production.
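The sizing math above can be sketched as a back-of-the-envelope calculation. The rep count and call rate come from the example; `queries_per_call` and `rows_per_query` are illustrative placeholders you would replace with measured values from pre-production testing.

```python
# Rough workload-fingerprint estimate for the support app in the example.

def calls_per_minute(reps: int, minutes_per_call: float) -> float:
    """Peak call arrival rate if every rep is continuously handling calls."""
    return reps / minutes_per_call

def db_load(calls_per_min: float, queries_per_call: int, rows_per_query: int):
    """Derive query and row throughput from the call rate."""
    queries_per_min = calls_per_min * queries_per_call
    return queries_per_min, queries_per_min * rows_per_query

rate = calls_per_minute(reps=500, minutes_per_call=2.5)   # 200 calls/min
# queries_per_call and rows_per_query below are assumed, not from the source.
queries, rows = db_load(rate, queries_per_call=12, rows_per_query=40)
print(f"{rate:.0f} calls/min -> {queries:.0f} queries/min, {rows:.0f} rows/min")
```

Estimates like this give a starting point; testing against simulated production traffic tells you how far off the assumptions are.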
Workload placement considerations
When most people think about workload placement, they are looking for the optimal place to run that workload. Some companies are only mature enough to find available space, while others look for the most efficient place to run the workload, with efficiency being a code word for “cost effective.” (If your thoughts jumped to “run slow apps on slow disk,” you get my point.) However, you need to consider more than just cost.
There are three dimensions you can optimize toward: cost, performance, and availability, and each has its own implications for placement decisions. For example, cost will be less of an issue for a mission-critical workload that absolutely has to be lightning fast; in that case, you should optimize on performance. Some workloads don’t have to be fast, but they do have to be always up, so those should be optimized on availability or redundancy. You need to understand the critical dimensions for each workload and then stack rank your workloads accordingly.
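One simple way to picture stack ranking is a weighted score per placement option across the three dimensions. The weights and per-option scores below are purely illustrative assumptions, not measured values.

```python
# Hypothetical stack-ranking sketch: score placement options against
# cost, performance, and availability, weighted by workload priorities.

def rank(options, weights):
    """Return placement options sorted best-first by weighted score."""
    def score(opt):
        return sum(weights[dim] * opt[dim] for dim in weights)
    return sorted(options, key=score, reverse=True)

# A mission-critical, latency-sensitive workload weights performance highest.
weights = {"cost": 0.1, "performance": 0.6, "availability": 0.3}
options = [
    {"name": "cheap-tier", "cost": 0.9, "performance": 0.3, "availability": 0.5},
    {"name": "fast-tier",  "cost": 0.3, "performance": 0.9, "availability": 0.7},
    {"name": "ha-cluster", "cost": 0.4, "performance": 0.6, "availability": 0.9},
]
print(rank(options, weights)[0]["name"])  # the performance-optimized tier wins
```

Change the weights (say, availability-heavy for an always-up workload) and the ranking shifts, which is the point: the right placement depends on which dimension matters most for that workload.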
Evaluating placement opportunities
Once you’ve characterized what an optimized application means (and each app is different!), you need to look at what’s available in your infrastructure. The key word here is available. It’s not enough to just look at what’s currently free—that’s a myopic way of viewing your infrastructure. Workloads are granted rights to resources that they may or may not be consuming at the present moment.
Imagine a scenario where a hyper-critical application’s workload is busy from 6 am to 6 pm, Monday through Friday. You have placed it on expensive hardware. But for 12 hours each weekday, and 24 hours a day over the weekend, that expensive gear sits relatively idle. If a critical batch-processing workload appears, you should consider co-locating those two workloads. What this means is that at any given moment, all of your options are on the table, and you need to consider the broadest set of permutations to truly optimize your infrastructure.
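The co-location check implied here reduces to asking whether two workloads’ busy windows overlap. A minimal sketch, assuming same-day windows on a 24-hour clock (the batch window is an assumed example, and this ignores windows that wrap past midnight):

```python
# Two workloads can share hardware if their busy windows don't overlap.

def windows_overlap(a: tuple, b: tuple) -> bool:
    """Each window is (start_hour, end_hour) on a 24h clock, same days."""
    return a[0] < b[1] and b[0] < a[1]

support_app = (6, 18)   # busy 6 am - 6 pm weekdays, per the example
batch_job = (20, 24)    # assumed overnight batch window
print(windows_overlap(support_app, batch_job))  # no overlap: co-location candidate
```

A real placement engine would also weigh weekend capacity, contention risk if a window slips, and headroom for growth, but the time-window test is the first gate.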
This is not a one-time exercise
Conditions change over time. You need to understand your workload needs on an ongoing basis and see how you’re trending. When that customer support team grows to 750 reps, and process improvements drive the average down to one call every 2 minutes, there’s an impact on your infrastructure. The application hasn’t changed, but the workload has. This so-called workload drift could require you to adjust your infrastructure to keep performance at expected levels, and to avoid surprise costs if you’re running in a cloud environment. And yes, this means your infrastructure architect’s job is never truly finished, and they need a way to constantly probe for better ways to run the workloads they have.
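Using the numbers from this example, the drift is easy to quantify: the same fingerprint math shows the call rate nearly doubling even though the application itself is unchanged.

```python
# Quantifying workload drift from the example's before/after numbers.

def calls_per_minute(reps: int, minutes_per_call: float) -> float:
    return reps / minutes_per_call

before = calls_per_minute(500, 2.5)   # 200 calls/min
after = calls_per_minute(750, 2.0)    # 375 calls/min
growth = (after - before) / before
print(f"{before:.0f} -> {after:.0f} calls/min, {growth:.1%} increase")
```

An 87.5% jump in call rate ripples through every downstream figure in the fingerprint (queries, rows, disk I/O, CPU), which is why drift forces a fresh placement evaluation.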
Keeping workloads optimized
If this sounds like a lot of work, it is—which is why you need tools to automate the process. And that’s exactly what the Virtana Platform helps you do. Our Optimize module helps you characterize your workloads, evaluate your placement options, and recommend improvements based on your priorities—on premises and in your public clouds.
If you’re ready to #KnowBeforeYouGo, contact us to request a Virtana Platform demo.