There’s been an uptick in enterprise public cloud offerings from providers like Google Cloud Platform (GCP) and Microsoft Azure. Amazon Web Services (AWS) still dominates the market but GCP and Azure are catching up. GCP recently announced an offering for Kubernetes on-premises called GKE On-Prem, and Microsoft just announced new support for billions of internet of things (IoT) devices with Azure IoT Hub.

Public Clouds for Enterprises

These enterprise-focused public cloud offerings continue to attract IT leaders, many of whom have been reluctant to use them in the past and are now rethinking their strategies to adopt a more hybrid cloud approach. They want the benefits of scalable, on-demand resources instead of going through a more traditional procurement process of scoping out and investing in physical infrastructure.

This is great for new resources but what about everything else IT leaders manage?

Legacy Infrastructure

Enterprises typically have a significant footprint of on-premises resources scattered in data centers and remote offices as part of their “legacy” infrastructure. Some of the old or legacy stuff works just fine. For example, mainframes that run batch jobs overnight might not be worth messing with, especially if they can be easily integrated with newer resources. For an investment application, retirement funds don’t need to be updated in real-time; just one batch job at night will do the trick. Moving this processing to the cloud would probably not be beneficial.

On the other hand, application architectures that need to interact with mobile and IoT devices require newer, real-time scalable resources to accommodate user demand. Digital businesses like Uber and Lyft have very responsive mobile applications to meet fluctuating demands.

What does this mean for IT leaders?

Interoperable and Extensible

IT leaders must advocate for digital components from public cloud providers that integrate with their complex digital ecosystems, both legacy and new. At a minimum, these components and solutions must be interoperable so they can share information easily, such as passing the nightly batch output from a data center job to a mobile app backed by compute in a public cloud. They must be able to be “networked” together on-premises, in the cloud, or from an IoT sensor. For example, Microsoft’s new Azure IoT Hub, mentioned at the beginning of this post, provides this interoperability for new application initiatives and IoT devices.

In other cases, digital components and solutions must be extensible so that key features can be customized for specific technologies or use cases. This typically means having a defined API so that specific actions can be scheduled or event-based. For example, a solution can publish a REST API so that another solution can trigger a specific task over the internet. A ticketing system will typically have a REST API so that another system can open a new service request. A monitoring system might use this ticketing system’s REST API to alert the IT department when memory usage goes above a threshold.
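As a sketch of that last flow, here is how a monitoring check might build the service request a ticketing system’s REST API would receive. The endpoint and field names are hypothetical, since every ticketing product defines its own API:

```python
import json

# Hypothetical endpoint -- a real ticketing system (ServiceNow, Jira,
# etc.) publishes its own URL and payload schema; this is only a sketch.
TICKET_API = "https://tickets.example.com/api/v2/requests"

def memory_alert_ticket(host, memory_pct, threshold=90):
    """Return a ticket payload if memory usage crosses the threshold,
    otherwise None. A monitoring system would POST the returned JSON
    to TICKET_API to open a new service request."""
    if memory_pct < threshold:
        return None
    return json.dumps({
        "summary": f"High memory usage on {host}",
        "description": f"Memory at {memory_pct}% (threshold {threshold}%)",
        "priority": "high",
    })
```

The monitoring system only needs the ticketing system’s URL and payload shape; neither side has to know anything else about the other, which is the point of interoperability through a published API.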

A hybrid cloud strategy includes integrations for interoperability and benefits when components can also be extensible. The idea is that a one-size-fits-all approach becomes outdated quickly with emerging technologies. Hybrid cloud means right-fitting the digital components across on-premises, private cloud, and public cloud environments. The components must connect and interact efficiently.

Checklist for Hybrid Cloud Initiatives

Solutions that are part of a hybrid cloud initiative should meet these integration and extensibility requirements.

How CloudBolt Helps with Extensibility

CloudBolt meets all aspects of this checklist by supporting all major private and public cloud providers with built-in resource handlers to set up any hybrid cloud environment.

Here’s a list of the out-of-the-box resource handlers in CloudBolt:

CloudBolt also has an extensible plugin architecture that allows users to create plugins in the form of Python scripts, remote shell scripts, webhooks, and email notifications. Plugins can be triggered in response to job events, rules, and user actions on services.
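As an illustration, a job-event plugin in Python might look something like the sketch below. The `run` hook name, signature, and return shape here are assumptions for illustration, not CloudBolt’s documented contract:

```python
# Sketch of a job-event plugin; the hook name, signature, and
# (status, output, errors) return convention are assumptions.
def run(job, logger=None, **kwargs):
    """Called when the triggering job event fires; performs a
    post-provision check on the server passed in the job context."""
    server = kwargs.get("server")
    if server is None:
        return "FAILURE", "", "No server in job context"
    if logger:
        logger.info(f"Post-provision check for {server}")
    return "SUCCESS", f"Validated {server}", ""
```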

The published API provides a way for other systems to run CloudBolt jobs without interacting with the graphical user interface (GUI), and CloudBolt can be managed remotely with command-line utilities in Python.

To see CloudBolt’s extensibility in action, sign up for a free demo of our software.

Hybrid cloud initiatives are now part of most modern enterprises. As a result, effective approaches for anything from storage to big data as a service have expanded from being mainly the domain of solution architects to becoming part of the enterprise-wide IT department. The main focus of any hybrid cloud initiative is to provide a best-fit deployment of IT elements in the right environment, whether on-premises, in a private cloud, or in a hosted public cloud environment. Now there are many architects designing the infrastructure.

Solution Architects

Solution architects typically handle the most critical, revenue-generating applications in the enterprise. Application performance is paramount as enterprises respond to digital business requirements, and their digital presence is now more important than their physical one, a reversal of where business used to take place.

Think of just the banking industry: almost all business has shifted from walk-in locations to almost exclusively online. You can now sign and acquire a mortgage without stepping out of the house.

Solution architects must design for mobile apps, websites, and internet of things (IoT) devices that run the business and they must always be thinking about how to deliver value faster. They should be keen on innovating and reducing the amount of overhead required for a fine-tuned distributed application architecture that scales on demand.

Without a hybrid cloud approach, solution architects will fall behind the competition. Adopting one works toward maximizing resources without compromising performance. The most successful solution architects figure out how to match the right capacity, at the right time, to whoever needs it most.

One of the best practice implementations for hybrid cloud is to allow for bursting when demand spikes. With this architecture, a load balancer can direct some web traffic to a server location that might be scaling in a public cloud on demand. Some of the backend resources that support that app can also be scaled similarly.
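The routing decision behind such bursting can be sketched in a few lines; the threshold and pool names here are illustrative:

```python
def route_request(onprem_load, onprem_capacity, burst_threshold=0.8):
    """Pick a backend pool for an incoming request: stay on-premises
    until utilization crosses the burst threshold, then spill over to
    an on-demand public cloud pool that can scale out."""
    utilization = onprem_load / onprem_capacity
    return "cloud-burst-pool" if utilization >= burst_threshold else "onprem-pool"
```

A real load balancer applies the same idea continuously, using health checks and live utilization metrics instead of static numbers.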

IT Architects

IT architects work with the nuts and bolts of the enterprise, focusing mainly on the infrastructure and network capacity that serve the private and public networking needs of every team in the organization. Their work is associated with both the internal supporting applications and the external revenue-generating applications.

IT architects are typically the gatekeepers of all IT resources and must ensure quality and evaluate new technologies before implementing them in a production setting.

As many enterprises are looking to migrate all or parts of their data centers to public cloud provider environments, savvy IT architects adopt a hybrid cloud approach that recognizes which infrastructure is best suited for the cloud and what should remain on-premises. They want visibility and control so that they can address any IT issue without having to painstakingly track down additional problems that may arise from loosely governed ad-hoc environments, sometimes referred to as shadow IT.

Cloud Architects

Some enterprises have cloud architects who assist both the solution and IT architect teams. This dedicated role can emerge from either team to dive deeper into the specific cloud provider platforms and make recommendations for application development and deployment, as well as options for storage and big data processing.

Enterprises benefit from having the expertise of a cloud architect who can keep up with the latest trends in cloud computing technologies from multiple vendors. They can provision, test, and manage cloud solution infrastructure in any environment and can weigh the pros and cons of different cloud solutions. Cloud architects can also be involved in contractual negotiations with providers and the procurement process.

They’ll be who the enterprise calls to make sure that any hybrid cloud solution is designed with both security compliance and high performance in mind.

CloudBolt for Enterprise Hybrid Cloud Initiatives

Leaders in enterprise IT know that having a centralized platform for managing the provisioning of all IT resources can make a big difference in managing hybrid cloud complexity and delivering value faster.

CloudBolt provides an optimal hybrid cloud platform that is vendor agnostic and has a plugin architecture for extensibility to almost any environment that has an API. Any of the three architects described in this post can take advantage of the CloudBolt platform for centralized visibility and control of any resource they want to consider as part of the architecture that they manage.

To understand more about how CloudBolt’s platform can help any hybrid cloud architect, check out our Product Overview.

For enterprise IT and hybrid cloud environments, IP address management (IPAM) solutions have become increasingly important, as both security and availability issues can take down the most critical aspects of any digital business. If IP addresses end up in the wrong hands or they are not properly managed and assigned throughout the enterprise, the results could be devastating.

IP addresses provide a unique identity to every single physical or logical node on a network, so that information can be sent to and from each device, real or virtual, and can be assigned by an IPAM or set manually to connect to a private or public network. The sooner IT administrators can recover from a breach based on an IP address issue the better. Otherwise, business is halted, and a troubleshooting nightmare begins.

DNS Overview: Names to IP Addresses

Enterprises use a domain name system (DNS) to translate their public domain name to an IP address. For example, if a company has an internet address on the web as “www.mycompany.com,” the DNS service translates that to a machine-readable public IP address.*

This public IP address then becomes the initial gatekeeper for all internet traffic going to the enterprise web servers or being sent from the enterprise web servers. This is all pretty simple so far, but there’s a lot more complexity behind that web server needed to deliver digital value from within the enterprise.
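That name-to-address translation is easy to see from code. Here is a minimal sketch using Python’s standard resolver, which consults DNS for public names:

```python
import socket

def resolve(hostname):
    """Ask the system resolver (which consults DNS for public names)
    for the IPv4 address behind a name -- the same translation DNS
    performs for a domain like www.mycompany.com."""
    return socket.gethostbyname(hostname)

# resolve("www.mycompany.com") would return that site's public IP;
# "localhost" resolves locally without a network round trip.
```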

DHCP Service and IPAM Behind the Scenes

In addition to the public IP address that an enterprise has for its corporate website, there are usually thousands of private IP addresses associated with the enterprise, sitting behind a firewall and configured as one or more private networks with subnetting. This means there’s not only traffic coming to and from the enterprise’s main website, but also lots of other traffic that never goes through the “digital storefront” and instead finds its way around using private IP address configurations.

This is what enables the work of the digital business. Addressing supports file transfers and processing between on-premises servers, databases, applications, services, and internet of things (IoT) devices and all the public cloud provider infrastructure resources used by most enterprises but not physically located on any of their sites.

Every endpoint in the enterprise must have an IP address unique to the network where it resides. This is all managed by an enterprise Dynamic Host Configuration Protocol (DHCP) service. The endpoints that need IP addresses can be workstation computers, servers, switches, routers, load balancers, printers, and wireless devices, but that is by no means an exhaustive list. A DHCP service must be able to handle IP addressing without creating conflicts across the entire enterprise. Most enterprises turn to an IPAM to make sure that IP addressing is handled smoothly.
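A toy allocator illustrates the conflict-avoidance job a DHCP service performs and an IPAM automates at enterprise scale; the subnet and policy here are purely illustrative:

```python
import ipaddress

class LeasePool:
    """Toy DHCP-style allocator: hands out unique addresses from a
    subnet and never issues duplicates -- the conflict-avoidance job
    an IPAM automates across an entire enterprise."""
    def __init__(self, cidr):
        # hosts() excludes the network and broadcast addresses
        self.free = list(ipaddress.ip_network(cidr).hosts())
        self.leased = {}

    def lease(self, device):
        if device in self.leased:          # one address per endpoint
            return self.leased[device]
        if not self.free:
            raise RuntimeError("address pool exhausted")
        addr = self.free.pop(0)
        self.leased[device] = addr
        return addr
```

Real DHCP adds lease expiry, reservations, and relay across subnets, but the core invariant is the same: no two endpoints on a network ever hold the same address.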

As with anything else in a modern digital ecosystem of interdependent resources, the more the IP addressing process is automated, the less room there is for errors.

How does this relate to a hybrid cloud management platform?

Imagine a scenario where different teams within a large organization relied on their own DNS or DHCP services but did not coordinate across the enterprise. There might be different policies set for security, and the environments could be changing very quickly without any oversight. If any IP address duplicates another, information flow stops.

If you’re not able to catch IP address issues before they impact end users, IT service requests start to pile up and critical digital work is halted. Managing all of this complexity is best handled by an enterprise-grade IPAM such as those from Infoblox and SolarWinds.

At CloudBolt we help you make sure you manage all IP address configuration from one central location. This way, you’ll be able to provision all your infrastructure resources, such as load balancers, web servers, app servers, and database servers, so that they are properly addressed without a hitch.

We integrate out of the box with Infoblox and SolarWinds IPAM, and you can also specify another IPAM system with a plugin.

* Some enterprises drop the “www” for what is called a “naked” domain name for simplicity. However, most enterprises who do this will typically redirect a naked domain like Facebook.com to www.facebook.com for technical reasons, while some marketers believe the shortened domain name has more appeal.

For enterprise IT, just about anything can be scripted and automated for infrastructure to be provisioned and started, stopped, deleted, restored, replicated, and more.

The scripted part can be anything from a command-line call to an infrastructure provider’s API to using a scripting language like Python to configure the logic. There are dedicated configuration tools for parts of the provisioning process, as well as full-blown, built-in support from cloud management platforms (CMPs) that can integrate with all of these scripting methods. This way, once everything is configured, the user simply points and clicks from a web-based user interface.
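As a sketch of the scripted end of that spectrum, the request a provisioning script might assemble and POST could look like this. The endpoint and field names are hypothetical, since every provider and CMP defines its own API:

```python
import json

# Hypothetical endpoint -- each provider or CMP publishes its own
# provisioning API; the point is the whole request can be scripted.
PROVISION_URL = "https://cloud.example.com/api/servers"

def provision_request(name, cpu, memory_gb, environment):
    """Build the JSON body a script would POST to stand up a server."""
    return json.dumps({
        "name": name,
        "cpu": cpu,
        "memory_gb": memory_gb,
        "environment": environment,   # e.g. "dev", "test", "prod"
    })
```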

At the enterprise level, a lot of developers have converged on these three configuration tools to do the upfront scripting that creates the logic to assemble and integrate infrastructure and application stacks for development, testing, and production:

Some enterprises use Terraform, an Infrastructure-as-Code (IaC) approach, to get what used to be bare-metal provisioning. An IaC platform treats standing up infrastructure much like the process developers use to develop, test, and release code under an agile or continuous delivery model for DevOps initiatives.

There’s a catch, though, for enterprises — IT operations teams often end up managing the infrastructure after it has gone through development and any test environments. They will hopefully be able to manage and update in coordination with the development or DevOps teams. It does not always go smoothly.

The more teams within the organization introduce their own automation technologies, the more complex things get. What if one team’s “tribal knowledge” approach to automation doesn’t match everyone else’s? I’ve heard the expression “skunk-works project” used to refer to environments that were developed but never standardized across the organization. Uncovering who is responsible for what, and how it was developed, can be a nightmare as teams change over time.

Automation, when implemented properly from a centrally managed enterprise hybrid cloud platform like CloudBolt, can have huge benefits for IT operations teams, infrastructure teams, and application development teams.

The benefits of a centrally managed hybrid cloud management platform will help:

A cloud management platform (CMP) is part of a larger cloud fabric orchestration strategy that typically helps enterprise IT control and manage the consumption of IT cloud-computing resources from a central location for end users. The idea is that there’s a lot of complexity that needs to be controlled. There are so many ever-changing on-premises, private cloud, and public cloud resources to update, manage, and deliver. 

You can think of it like who gets what in a large school cafeteria. Some get the planned meal in a serving line. Some kids bring their own lunch but might buy a snack or a drink. There might be a salad bar or an a la carte section where you order a sandwich. Payment happens with pre-paid lunch cards, voucher cards for guests or kids on free lunches, and then, of course, the typical cash payment options.

A CMP has to manage complexity in much the same way. It’s about choices: who gets what, and how they pay.

Here’s a brief summary of the key aspects of an enterprise CMP:

Enterprise IT must consider the pros and cons of adopting a CMP that can handle the diverse environments from both a legacy IT infrastructure and private and public cloud resources. They should also consider their new digital business objectives enabled by provisioning and orchestrating IT resources from one or more environments, such as Microsoft Azure, Amazon Web Services (AWS), Google Compute Engine (GCE), Nutanix Acropolis, or VMware.

The most significant value that enterprises can gain from a CMP is to enable cloud fabric orchestration that is vendor agnostic and can connect and deliver all these resources to the end user. It is an easy way to get what you need quickly without getting bogged down in the process, and a step towards ensuring you get the visibility, control, and automation you need to keep cloud waste to a minimum and increase cloud value.

If you’re interested in getting the absolute most out of your cloud environment, a good CMP should be a part of your overall strategy to maximize your cloud ROI.

Ready to take your CMP to the next level? Request a demo or learn how CloudBolt can help solve your cloud ROI problem.

Hybrid Cloud Landscape

Imagine a beautifully landscaped outdoor space at a hotel or resort being watered by a firehose rather than a regular garden hose or a sprinkler system. The firehose would drown the plants, and the garden hose would require a lot of manual work, but the sprinkler system is programmed to water the grounds at the right time of day, controls the direction and quantity of water, and requires minimal manual effort.

In this quick guide, I’ll explain how controlling the flow of water compares to controlling hybrid cloud spending, that is, controlling how much your computing resources run and where they run.

Consider that these hybrid cloud resources span on-premises, private cloud, and public cloud environments. There are multiple ways to control the flow of these resources for enterprise end users, much like a carefully programmed and managed sprinkler system. By not overwatering or overspending on hybrid cloud resources, enterprises achieve greater value from their digital business objectives.

Central Management

As enterprises grow, emerging technologies provide the ability to do things faster with fewer resources. This causes an inevitable tension between the role of core IT and the decentralized innovation efforts of the enterprise.

Teams responsible for innovation will often have access to new and emerging technologies without central IT oversight. Imagine many independent, well-meaning gardeners each taking care of a section of the grounds for a huge complex like Disney World, and not coordinating with each other. Each would have their own accounts for water, electric, landscaping, gardening suppliers, and more. They would water their sections on different schedules and possibly leave water running because there are no consequences for wasteful spending.

The same thing can happen with the multi-source, multicloud offerings for any hybrid cloud initiatives within a large enterprise. The temptation is real to have a number of teams doing their own thing, getting their own resources, and often aggravating the core IT teams who want to maintain standards for performance, security, and cost. In this situation, a centrally managed platform that provides access to the “tempting” resources can provide a much more efficient process.

For example, with a centrally managed hybrid cloud platform, an IT admin can set up connections to accounts for multiple public cloud providers like Amazon Web Services (AWS), Microsoft Azure, Google Compute Engine (GCE), and IBM SoftLayer, and provide the innovative teams with controlled access to right-fit resources from one place instead of creating logins and access to the multiple consoles provided by each vendor. They can also foster transparency by associating a cost with each of the resources used and then hold users accountable if wasteful spending occurs. This platform also provides faster access to centrally managed accounts, and in turn will drive innovation without the risk of resources running unattended or without governance.

Controlling Up and Down Time

Just as our gardeners can program a sprinkler system to be run only when it’s needed, you can programmatically control the running hours of your IT infrastructure. This is particularly important for public cloud resources that are running by the hour whether you are actually using their compute power or not. Idle virtual machines (VMs) running in your data center might not be a big issue, but VMs running in a public cloud without doing anything important are a huge waste of money.

Recently, a colleague and I were working on a project in our Google Cloud account with a VM, and I could quickly see that if I left this relatively small server resource running and tied to our account, I’d spend about $100/month. That might not seem like a lot, but imagine a large enterprise with hundreds or thousands of servers left running without being used.

In the above situation, I decided to manually turn the VM on and off on an ad hoc basis as I developed our project. Using a centrally managed platform like CloudBolt, this level of configuration could have been handled programmatically using Power Schedules.

You can configure the “on and off” times for individual servers or for specific groups of servers. These times can be set by end users as VMs are deployed, or IT admins can configure them so that end users don’t specify anything; they just know that the servers they’ve provisioned are only available during business hours or an even smaller window of development time. The power-on and power-off sequence can also be built to properly shut down, or more importantly bring back up, a group of multi-tiered environments that have dependencies.
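The decision behind such a schedule is simple to express. A minimal sketch, with illustrative business-hours defaults:

```python
from datetime import datetime

def should_be_on(now, on_hour=8, off_hour=18, weekdays_only=True):
    """Decide whether a server on a 'business hours' power schedule
    should currently be running. The 8-to-18, weekdays-only defaults
    are illustrative, not a recommendation."""
    if weekdays_only and now.weekday() >= 5:   # Saturday or Sunday
        return False
    return on_hour <= now.hour < off_hour
```

A scheduler would evaluate this periodically for each server or group and issue the corresponding power-on or shutdown action, honoring any inter-tier dependencies in the ordering.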

Showing Costs, Implementing Quotas, and Expiring Resources

Controlling the on and off times of running resources in public clouds has a big impact on controlling hybrid cloud spending, while other ways to control spending can involve a little more planning:

Showing Costs

Making sure that cost information is available for each of your hybrid cloud resources helps end users make better decisions about their spending before deploying resources. With a centrally managed platform, the costs can be the actual metered costs across your public cloud provider accounts or configured as relative costs among a set of resources that end users can select from. Taking it a step further, the IT admin can configure orchestration behind the scenes to select the best fit by criteria determined ahead of time or entered manually by the end user.

Implementing Quotas

Implementing quotas for specific sets of users can also control costs so that end users don’t overprovision resources. For example, one CloudBolt customer sets a threshold quota of five servers per developer. Because they have potentially hundreds of developers who need servers, this relatively small planning effort saves the company several thousand dollars each month. If a developer needs more than five servers, the quota parameter sends an approval request instead of just letting the developer “open the firehose” of resources.
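That gate can be expressed in a few lines; the quota value and return labels are illustrative:

```python
def request_servers(current_count, requested, quota=5):
    """Gate a provisioning request behind a per-developer quota:
    auto-approve under the limit, route to approval above it."""
    if current_count + requested <= quota:
        return "provision"
    return "needs_approval"
```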

Expiring Resources

Expiration dates can also be specified similarly to quotas. The IT admin can specify an expiration date for all server resources provisioned by developers so if they eventually become dormant they do not run unnecessarily and rack up server costs for no reason. Again, a built-in stopgap, similar to the quota parameter can be implemented so end users can extend the expiration date with proper approvals.
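A sketch of that expiration check, with an illustrative 30-day lifetime:

```python
from datetime import date, timedelta

def expiry_action(provisioned_on, today, lifetime_days=30, extended=False):
    """Decide what to do with a server at its expiration check: keep
    it while it is within its lifetime (or an approved extension was
    granted), otherwise power it down so it stops accruing cost."""
    expires = provisioned_on + timedelta(days=lifetime_days)
    if today < expires or extended:
        return "keep"
    return "power_off"
```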

Control the Firehose on Hybrid Cloud Landscapes  

Controlling costs or taming firehose spending on hybrid cloud initiatives can have a significant impact on the value that enterprise IT delivers to an organization. Having control of hybrid cloud resources in the hands of centralized, enterprise IT can be a welcomed addition, especially for the teams who originally went out on their own innovation initiative without keeping a close eye on spending.

 

To learn more about how CloudBolt can help you manage your hybrid cloud spending, read our Solutions Overview or check out our Resource Center.

Docker Containerization

Many of us in enterprise IT hear about new technologies like Docker containerization and Kubernetes and think, “I could figure that out if I had time, but right now I’m too busy fighting fires and figuring out other stuff that’s more important.”

That’s been my initial experience with Docker and Docker Swarm for containerization technology. To understand Docker, think of the ship metaphor portrayed in Docker’s logo: the underlying operating system is the ship, and the Docker containers are the shipping containers it carries. This contrasts with a VM, whose “cargo” of apps has to stay with the operating system, the specific ship it’s “moving” along on. It might be a bit of a stretch, but it helped me understand the basic concepts.

For Docker Swarm, I think of a “swarm of bees” roaming in concert with each other. Containers are orchestrated by the properties defined in a Docker Swarm—meanwhile, the queen bee is in the background managing the swarm. I describe this in more detail in my LinkedIn post, Who’s Swarming Now With Docker.

Kubernetes Orchestration

Many enterprise solution architects and developers are now using Kubernetes for orchestrating Docker containers. At first I thought, “What the heck does Kubernetes have to do with these shipping or bee metaphors for IT orchestration?” Well, it turns out that Kubernetes is Greek for “helmsman,” so it describes who is steering the ship. I’m thinking that Kubernetes conveniently claims the better metaphor. Think about it: a Docker swarm literally means a bunch of flying insects? You be the judge.

Kubernetes is emerging as the preferred enterprise orchestration tool for containerization because it tends to be more scalable by most standards… but let’s not play favorites. Here’s an unbiased rundown that compares Kubernetes vs Docker Swarm.

The key point is to make sure that you’re using technology that provides more business value for your overall digital business objective.

Kubernetes orchestration scales to meet the needs of fast-moving digital business objectives with an open source platform that allows developers and architects to modularize services that are resilient for large enterprises. Kubernetes also has the ability to orchestrate microservice communication with load balancing and networking that is easy and scalable to almost any environment.

CloudBolt and Kubernetes

CloudBolt supports Kubernetes orchestration for enterprises in the following ways:


For more information, check out our Resource Center or download our Product Overview.

Join the CloudBolt Users Slack Workspace for real-time collaboration

Ever have that moment when you just wanted to know something fast, and you didn’t feel like submitting a question to an email alias or tech community and waiting?

Well, it’s time to Slack! Customers can join the CloudBolt Users Slack Workspace to ask questions, share insights, and get to know each other in this online forum. Of course, CloudBolt support is always available to help, should you wish to enter a support ticket.

“We decided to leverage a Slack Workspace and give our customers a real-time forum to share best practices and collaborate, and find ways to automate challenging tasks using CloudBolt.”
– Bernard Sanders, CTO CloudBolt Software

How do I join the CloudBolt Users Slack Workspace?

All CloudBolt customers are eligible to join. Send an email request to slack@cbgcdev.wpengine.com and ask to join the CloudBolt Users Slack Workspace. You will receive a reply within one business day with further instructions.

Then what?

We’re excited to see the innovations this CloudBolt Users Slack Workspace will catalyze as our valued customers connect and share their ideas with each other.

Note—The CloudBolt Users Slack Workspace is not an alternative to CloudBolt Support. If you encounter an issue with CloudBolt, please submit a Support ticket and our experts will be happy to help you!

Not a customer (yet)?

Learn more about CloudBolt:

Our CloudBolt hybrid cloud management platform for enterprises helps IT admins provide simple to very complex IT resources to end users from a single console.

Balancing the risk and reward for delegating tasks has always been an issue for management or leadership positions within an organization. The goal of delegating is to make an organization work more efficiently. That’s the reward. But the risk is that stuff might not get done properly or it ends up being inefficient or unexpectedly costly for one reason or another. A lack of oversight or shortage of the right skill sets can quickly turn into a disaster.

The same is true for IT provisioning and self-service.

For IT services, there have been several ways to provision resources or “delegate” them as needed over the years. The degree to which it is “self” vs “someone-else” doing it as a service has varied and continues to evolve.

As a tech marketing engineer, getting my infrastructure resources for demo environments has ranged from requesting resources in an email to IT services, to accessing everything from a public cloud provider on my own.

For example, in my first technical role, I would have multiple remote desktop sessions running on my local machine from images requested from IT. My server images were for Windows 2008 Server (W2K8 R2 to be a little nostalgic), and I would install our enterprise software solution on top of them based on build candidates. Later, I had my own login credentials to a VMware vSphere client that let me spin up my own virtual machines (VMs) with server environments from multiple operating systems—specifying the compute capacity myself. I would take snapshots of incremental versions of working VMs and roll back to older versions if something went wrong.

Eventually, I was “self-provisioning” my environment, running EC2 instances on Amazon Web Services (AWS) and paying for them with a corporate credit card. I logged in to the AWS portal with my personal login credentials and everything was paid for by the company. We had a marketing budget of about $1,200 per month charged to our corporate American Express card, until the accounts were later consolidated by central IT to make the process more efficient.

On another occasion, I had to start an instance of an Ubuntu Linux server based on a repository of open source files from GitHub that I accessed myself. It will be interesting to see what happens now that Microsoft is buying GitHub and is willing to pay $7.5 billion.

With any one of these scenarios, there was a self-service element to the task, and in some cases a specific login to an environment, where I could specify settings and then start running whatever it was that I needed. If all of this ran smoothly, there was nothing to worry about. “Self-service” was a good thing.

But now considering enterprise IT, to what extent do IT leaders want to provide self-service provisioning for employees to do digital business?

Today there is a lot more at stake, and it has become a balancing act of risk and reward based on timing, technology, security, cost, and possibly a whole lot more depending on what is being self-serviced. Whether my tech marketing IT provisioning scenarios went well was practically inconsequential, because only I and a few others were impacted by the efficiency of my process.

Suppose there’s a much larger scale of several thousand employees or more, maybe even 10,000 employees at some point. A self-service strategy to have the right IT enterprise resources at the right time for the right digital task can have a huge impact on the overall success of the digital business underway.

Here are some ways that IT leaders enable a continuum of self-service IT:

CloudBolt is a cloud management platform for all clouds, internal and external, helping end users leverage compute, network, and storage resources from anywhere deemed appropriate for the enterprise. Enterprises can deploy infrastructure to run digital business services and applications when and where they need them across multiple private and public cloud resources to avoid vendor lock-in.