Quick Guide for Controlling Hybrid Cloud Spending
Hybrid Cloud Landscape
Imagine a beautifully landscaped outdoor space at a hotel or resort being watered by a firehose rather than a regular garden hose or a sprinkler system. The firehose would drown the plants, and the garden hose would require a lot of manual work, but a sprinkler system can be programmed to water the grounds at the right time of day, control the direction and quantity of water, and require minimal manual effort.
In this quick guide I’ll explain how controlling the flow of water compares to controlling hybrid cloud spending: that is, controlling how much and how long computing resources run, wherever they happen to be running.
Consider that these hybrid cloud resources span on-premises, private cloud, and public cloud environments. There are multiple ways to control the flow of these resources for enterprise end users, much like a carefully programmed and managed sprinkler system. By not overwatering or overspending on hybrid cloud resources, enterprises achieve greater value from their digital business objectives.
Central Management
As enterprises grow, emerging technologies provide the ability to do things faster with fewer resources. This causes an inevitable tension between the role of core IT and the decentralized innovation efforts of the enterprise.
Teams responsible for innovation will often have access to new and emerging technologies without central IT oversight. Imagine many independent, well-meaning gardeners each taking care of a section of the grounds for a huge complex like Disney World, and not coordinating with each other. Each would have their own accounts for water, electric, landscaping, gardening suppliers, and more. They would water their sections on different schedules and possibly leave water running because there are no consequences for wasteful spending.
The same thing can happen with the multi-source, multicloud offerings of any hybrid cloud initiative within a large enterprise. The temptation is real to have a number of teams doing their own thing, getting their own resources, and often aggravating the core IT teams who want to maintain standards for performance, security, and cost. In this situation, a centrally managed platform that offers access to those “tempting” resources can make the process far more efficient.
For example, with a centrally managed hybrid cloud platform, an IT admin can set up connections to accounts for multiple public cloud providers such as Amazon Web Services (AWS), Microsoft Azure, Google Compute Engine (GCE), and IBM SoftLayer, then give innovation teams controlled access to right-fit resources from one place instead of creating logins for each vendor’s console. Admins can also foster transparency by associating a cost with each resource used and holding users accountable when wasteful spending occurs. The platform also provides faster access to centrally managed accounts, which in turn drives innovation without the risk of resources running unattended or without governance.
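To make the idea concrete, here is a minimal sketch in Python of what a central catalog of provider connections might look like, with a cost attached to every offering and access scoped per team. This is purely illustrative and not CloudBolt’s actual data model or API; the account names, groups, and prices are all hypothetical.

```python
# Illustrative only: a toy model of a central catalog, not CloudBolt's data model or API.
from dataclasses import dataclass

@dataclass
class ProviderAccount:
    name: str                  # e.g. "aws-prod", "gce-dev" (hypothetical)
    provider: str              # "aws", "azure", "gce", "softlayer"
    allowed_groups: set[str]   # which teams may consume from this account

@dataclass
class ResourceOffering:
    label: str                 # what end users see, e.g. "Small Linux VM"
    account: ProviderAccount
    hourly_cost: float         # metered or relative cost shown at order time

def catalog_for(group: str, offerings: list[ResourceOffering]) -> list[ResourceOffering]:
    """Return only the offerings this team is entitled to, with costs attached."""
    return [o for o in offerings if group in o.account.allowed_groups]

# Example: one place to see every account, instead of a separate console per vendor.
aws = ProviderAccount("aws-prod", "aws", {"web-team", "data-team"})
gce = ProviderAccount("gce-dev", "gce", {"data-team"})
offerings = [
    ResourceOffering("Small Linux VM", aws, 0.09),
    ResourceOffering("GPU training node", gce, 1.40),
]
for o in catalog_for("data-team", offerings):
    print(f"{o.label} ({o.account.provider}): ${o.hourly_cost:.2f}/hr")
```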
Controlling Up and Down Time
Just as our gardeners can program a sprinkler system to run only when it’s needed, you can programmatically control the running hours of your IT infrastructure. This is particularly important for public cloud resources that are billed by the hour whether you are actually using their compute power or not. Idle virtual machines (VMs) running in your data center might not be a big issue, but VMs sitting idle in a public cloud are a huge waste of money.
Recently, one of my colleagues and I were working on a project in our Google Cloud account with a VM, and I could quickly see that if I left this relatively small server resource running and tied to our account, I’d spend about $100/month. That might not seem like a lot, but imagine a large enterprise with hundreds or thousands of servers left running without being used; at $100 apiece, a thousand idle servers is $100,000 a month.
In the above situation, I decided to manually turn the VM on and off on an ad hoc basis as I developed our project. Using a centrally managed platform like CloudBolt, this level of configuration could have been handled programmatically using Power Schedules.
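Even the manual approach boils down to one API call in each direction. Here is a rough sketch assuming a recent version of the google-cloud-compute Python client library and default credentials; the project, zone, and instance names are placeholders, and the exact return types can vary by library version.

```python
# A minimal sketch of the manual approach: stop a Compute Engine VM when you're done
# for the day and start it again the next morning. Assumes the google-cloud-compute
# client library and application default credentials; names below are placeholders.
from google.cloud import compute_v1

PROJECT, ZONE, INSTANCE = "my-project", "us-central1-a", "dev-vm"

def stop_vm() -> None:
    op = compute_v1.InstancesClient().stop(project=PROJECT, zone=ZONE, instance=INSTANCE)
    op.result()  # wait until the VM is stopped so hourly compute billing ends

def start_vm() -> None:
    op = compute_v1.InstancesClient().start(project=PROJECT, zone=ZONE, instance=INSTANCE)
    op.result()
```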
You can configure the on and off times for individual servers or for specific groups of servers. End users can set this as VMs are deployed, or IT admins can configure the on and off times so that end users don’t specify anything; they simply know that the servers they’ve provisioned are only available during business hours, or during an even smaller window of development time. The on and off sequence can also be built to properly shut down, and more importantly to bring back up, a group of multi-tiered environments that have dependencies.
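The general idea behind such a schedule is simple: a periodic job checks the clock and powers a dependency-ordered group of servers on for business hours and off outside them. The sketch below is illustrative only, not CloudBolt’s Power Schedules implementation; the server names and the power_on/power_off hooks are hypothetical.

```python
# Rough sketch of a power schedule: run from cron or a scheduler every few minutes.
from datetime import datetime

BUSINESS_HOURS = range(8, 18)            # 8:00-17:59 local time, Monday-Friday
# Boot order matters for multi-tier environments: database first, web tier last.
TIER_ORDER = ["db-vm", "app-vm", "web-vm"]

def enforce_schedule(now: datetime, power_on, power_off) -> None:
    """power_on/power_off are callables taking a server name (hypothetical hooks)."""
    if now.weekday() < 5 and now.hour in BUSINESS_HOURS:
        for server in TIER_ORDER:              # start dependencies first
            power_on(server)
    else:
        for server in reversed(TIER_ORDER):    # shut down in reverse order
            power_off(server)

# Example wiring with print stubs; real hooks would call the cloud provider API.
enforce_schedule(datetime.now(),
                 power_on=lambda s: print(f"start {s}"),
                 power_off=lambda s: print(f"stop {s}"))
```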
Showing Costs, Implementing Quotas, and Expiring Resources
Controlling the on and off times of running resources in public clouds has a big impact on hybrid cloud spending; other ways to control spending take a little more planning:
Showing Costs
Making sure that cost information is available for each of your hybrid cloud resources helps end users make better decisions about their spending before deploying resources. With a centrally managed platform, the costs can be the actual metered costs across your public cloud provider accounts or configured as relative costs among a set of resources that end users can select from. Taking it a step further, the IT admin can configure orchestration behind the scenes to select the best fit by criteria determined ahead of time or entered manually by the end user.
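As a rough illustration of that kind of best-fit orchestration, the selection logic can be as simple as filtering candidates by the requester’s criteria and picking the cheapest option that qualifies. The option names, sizes, and prices below are made up for the example.

```python
# Illustrative placement logic: pick the cheapest candidate that meets the request.
candidates = [
    {"name": "aws-option-a",   "vcpus": 2, "mem_gb": 8,  "hourly_cost": 0.096},
    {"name": "azure-option-b", "vcpus": 2, "mem_gb": 8,  "hourly_cost": 0.092},
    {"name": "gce-option-c",   "vcpus": 4, "mem_gb": 16, "hourly_cost": 0.134},
]

def best_fit(options, min_vcpus, min_mem_gb):
    """Cheapest option that meets the requested CPU and memory floor, or None."""
    eligible = [o for o in options
                if o["vcpus"] >= min_vcpus and o["mem_gb"] >= min_mem_gb]
    return min(eligible, key=lambda o: o["hourly_cost"]) if eligible else None

choice = best_fit(candidates, min_vcpus=2, min_mem_gb=8)
if choice:
    print(choice["name"], f'${choice["hourly_cost"]}/hr')  # cost shown before deploying
```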
Implementing Quotas
Implementing quotas for specific sets of users can also control costs so that end users don’t overprovision resources. For example, one CloudBolt customer sets a threshold quota of five servers per developer. Because they have potentially hundreds of developers who need servers, this relatively small planning effort saves the company several thousand dollars each month. If a developer needs more than five servers, the quota triggers an approval request instead of just letting the developer “open the firehose” of resources.
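The quota check itself is just a small gate in front of provisioning. This toy sketch shows the shape of it; the threshold value and the approval hook are illustrative, not a real CloudBolt API.

```python
# Toy quota gate: under the threshold, provision; over it, route for approval.
SERVER_QUOTA = 5   # e.g. five servers per developer

def handle_request(developer: str, current_count: int, request_approval) -> str:
    if current_count < SERVER_QUOTA:
        return f"provision server #{current_count + 1} for {developer}"
    request_approval(developer)   # hypothetical hook into an approval workflow
    return f"{developer} is at quota ({SERVER_QUOTA}); approval requested"

print(handle_request("alice", 3, request_approval=lambda d: None))
print(handle_request("alice", 5, request_approval=lambda d: print(f"notify {d}'s manager")))
```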
Expiring Resources
Expiration dates can be specified in much the same way as quotas. The IT admin can set an expiration date for all server resources provisioned by developers, so servers that eventually go dormant don’t keep running and racking up costs for no reason. Again, a built-in stopgap, similar to the quota parameter, can be implemented so end users can extend the expiration date with proper approvals.
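A sketch of that expiration pattern might look like the following; the default lifetime, the server records, and the decommission hook are all hypothetical.

```python
# Sketch of resource expiration: stamp an expiry date at deploy time, then a daily
# sweep retires anything past its date unless an approved extension pushed it out.
from datetime import date, timedelta

DEFAULT_LIFETIME = timedelta(days=30)   # hypothetical default lifetime

def expiry_for(provisioned_on: date) -> date:
    return provisioned_on + DEFAULT_LIFETIME

def sweep(servers: list[dict], today: date, decommission) -> None:
    """servers: [{'name': ..., 'expires': date}]; decommission is a hypothetical hook."""
    for s in servers:
        if today > s["expires"]:
            decommission(s["name"])     # dormant server stops racking up costs

servers = [{"name": "dev-vm-1", "expires": expiry_for(date(2024, 1, 2))}]
sweep(servers, today=date.today(), decommission=lambda name: print(f"retire {name}"))
```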
Control the Firehose on Hybrid Cloud Landscapes
Controlling costs, or taming firehose spending, on hybrid cloud initiatives can have a significant impact on the value that enterprise IT delivers to an organization. Putting control of hybrid cloud resources in the hands of centralized enterprise IT can be a welcome addition, especially for the teams that originally went out on their own innovation initiatives without keeping a close eye on spending.
To learn more about how CloudBolt can help you manage your hybrid cloud spending, read our Solutions Overview or check out our Resource Center!