As you begin to adopt a hybrid cloud model, how you think about resources must change. In private cloud environments, you have already paid for the hardware, software, and so on. In the public cloud, however, you consume and pay for everything on a pay-as-you-go model. 

Your accounting model, and how you think of your IT budget, must now change. You do not want to spend everything on day one, but you also do not want to lose your budget by leaving it unused. Hint: buying reserved capacity can help you utilize your budget in an optimal way. 

Your finance team expects you to report on your capacity, usage, and performance differently, since they want to keep tabs on everything. In fact, finance and IT teams need to partner on this journey as you redefine your value to your own business and IT consumers. 

Reliance on Key Performance Indicators (KPIs)

For better management of your hybrid cloud, use Key Performance Indicators (KPIs) to measure success. You need to be proactive, not reactive, on every front. KPIs will help you realize planned savings, use your budget efficiently, and integrate new technologies effectively for your business. For example, you might track KPIs around planned savings realized, budget utilization, and resource efficiency.

CloudBolt for Complete Visibility 

Gaining complete visibility is one of the most basic and important milestones to work toward as you move to a hybrid cloud environment. You need to understand your environment through different lenses (as a whole, by group, by application, by user, and so on). This is easier said than done, especially since the data from your providers arrives in disparate formats that may be difficult to consume and comprehend. 

You need a tool that can help you understand not only the broad strokes but also the nuances. It should also serve as a single pane of glass for your team, even across different sources and data types: for example, data from your AWS invoice alongside data center information about hardware, software, heating, and cooling. That’s a lot of information to maintain in a worksheet. You need a reliable tool that can scale as your environment grows. 
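To make the problem concrete, here is a toy Python sketch (all figures fabricated) of folding two differently shaped cost feeds into one schema; this is the kind of normalization a worksheet forces you to do by hand, and a tool should do for you:

```python
# A toy sketch (all figures fabricated) of folding two differently shaped
# cost feeds into a single schema before exporting one combined report.
import csv

aws_invoice = [{"service": "EC2", "usd": 1200.0}, {"service": "S3", "usd": 310.0}]
datacenter = [{"item": "cooling", "monthly_cost": 800.0}, {"item": "hardware", "monthly_cost": 2500.0}]

def normalize(rows, name_key, cost_key, source):
    """Map a provider-specific row shape onto a common (source, item, usd) schema."""
    return [{"source": source, "item": r[name_key], "usd": r[cost_key]} for r in rows]

with open("combined_costs.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["source", "item", "usd"])
    writer.writeheader()
    writer.writerows(
        normalize(aws_invoice, "service", "usd", "aws")
        + normalize(datacenter, "item", "monthly_cost", "datacenter")
    )
```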

CloudBolt provides comprehensive visibility and reporting capabilities for your private and public cloud environments. It provides a variety of reports and dashboards that you can use to gain granular insights into your organizations, applications, or users. You can also customize these reports to slice and dice the same information into the most relevant format. The following image shows a dashboard for your environment, which you can customize further (by group, server, owner, and so on). 

The example below shows the cost of servers across all groups for the month of March 2020. You can narrow it further to a particular group or application of your choice, which gives you the power to compare costs across groups or applications over a period of time. You can email this report to your finance team or export it as a comma-separated values (CSV) file for integration with existing tools if needed. 

CloudBolt simplifies cost management across clouds, data centers or other environments from a single platform.

To schedule a quick demo to see the cost features in action, click here.

A penny saved is a penny earned. Simple enough? The same applies to your hybrid cloud environment. 

The more you save, the more money is freed up to try something new, or to fund that critical project you have been putting off for some time now. 

In this post, we will take a look at six simple things you can do to save on your hybrid cloud costs. Many of them apply primarily to the public cloud, since you have already paid for your data center resources. However, 1 GB saved in the data center is 1 GB earned. Let’s jump right in. 

Zombie resources are still alive

Zombie resources get created when customers spin up resources such as virtual machines or storage in public clouds and then forget about their existence. This can happen when resources are provisioned by mistake, spun up for trial purposes, or deployed in the wrong environment (production instead of dev or test). It can also happen thanks to shadow IT, when users pay with their corporate cards and completely forget about the provisioned resources.

While zombies can drain your IT budget, controlling them takes rigor. For example, once a month you should look through your resources and spot the ones that are no longer needed. You can also set automated weekly or biweekly policies to find and destroy these zombies. Of all the items on this list, zombies are the easiest to spot and take care of if you follow a routine. 
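As a rough illustration, here is a minimal Python sketch of such a policy for AWS using boto3. The required tag keys are an illustrative assumption about your tagging policy, and a real job should report candidates for review rather than destroy them outright:

```python
# A minimal sketch of a weekly "zombie hunt" for AWS using boto3.
# The required tag keys are illustrative assumptions, not real settings.
import boto3

REQUIRED_TAGS = {"owner", "project"}  # hypothetical tagging policy

def find_zombie_candidates(region="us-east-1"):
    """Flag stopped or untagged instances as zombie candidates for review."""
    ec2 = boto3.client("ec2", region_name=region)
    candidates = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"].lower() for t in inst.get("Tags", [])}
                stopped = inst["State"]["Name"] == "stopped"
                untagged = not REQUIRED_TAGS <= tags
                if stopped or untagged:
                    candidates.append((inst["InstanceId"], inst["State"]["Name"]))
    return candidates

if __name__ == "__main__":
    for instance_id, state in find_zombie_candidates():
        print(f"Review {instance_id} (state: {state})")
```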

Resources are kept ON 24/7

This is like leaving the lights on in your garage all night without noticing. And just like electricity, the public cloud uses a pay-what-you-consume model. If you spin up resources and don’t turn them off when they are not in use (such as on weekends or weeknights), you are paying a high price over time without even realizing it. If you run a big team or have lots of applications, this can multiply quickly if you do not keep tabs on it. 

This is a very common mistake. While most resources in the data center are kept on at all times, doing the same in the public cloud wastes money that is easy to save. Pushing owners to turn off their resources when not in use is an easy solution. You can use reports and dashboards to understand usage and spot underutilized resources. This will not only help you save but will also improve your KPIs around resource efficiency. 

Hint: KPIs help you track resource usage and performance very easily. We will talk more about this in the next blog post, which is focused on cost. 
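To make the advice above concrete, here is a minimal Python sketch of a nightly shutdown job for AWS using boto3; the env=dev tag filter is an illustrative assumption about how your non-production resources are labeled. Pair it with a matching start job for weekday mornings.

```python
# A minimal sketch of an after-hours shutdown job for AWS using boto3.
# The "env=dev" tag filter is an illustrative assumption.
import boto3

def stop_dev_instances(region="us-east-1"):
    """Stop running instances tagged env=dev, e.g. from a nightly cron job."""
    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:env", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        inst["InstanceId"]
        for reservation in response["Reservations"]
        for inst in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids
```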

Unattached resources left orphaned

In the public cloud, when you deploy certain resources such as virtual machines, the required storage is deployed along with them. Yet when you spin down a virtual machine, the attached storage that came with it is left running. This means you have resources you may not be using, or that you don’t know exist. This can eat into your bottom line as you keep using public clouds. Just imagine the impact if you are a global company without the right measures to control this cost.

This needs extra attention. Developing reports and processes to check for unattached objects is a healthy start. You can also drive financial transparency through a showback report that shows users how much money is being spent on resources left unattached. Plus, you can give IT admins the ability to wind these resources down if they are left unattached for a certain period of time. 
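As an illustration, a minimal boto3 sketch of such a report follows; in AWS, EBS volumes in the "available" state are attached to nothing:

```python
# A minimal sketch of an orphaned-storage report for AWS using boto3.
# Volumes in the "available" state are attached to nothing.
import boto3

def report_unattached_volumes(region="us-east-1"):
    """Print every EBS volume that is not attached to any instance."""
    ec2 = boto3.client("ec2", region_name=region)
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )["Volumes"]
    for vol in volumes:
        print(f"{vol['VolumeId']}: {vol['Size']} GiB, created {vol['CreateTime']:%Y-%m-%d}")
    return volumes
```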

Over-provisioning resources

Old habits die hard. If you are coming from a data center environment, your developers and users might be provisioning more resources than they actually need. This might be a habit formed to avoid delays in getting resources when they are needed, or it might be because they never had to pay for those resources. Carried over to the public cloud, these habits burn a hole in your IT budget. 

In any case, you need to know how well utilized your resources are. For example, if resources such as storage are underutilized, you should move to a less expensive storage tier. You can find out by checking peak, average, and lowest usage for each resource. This gives you enough information to make conscious decisions and save more money. 
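As one hedged example, this sketch pulls peak, average, and lowest CPU utilization for an instance from AWS CloudWatch using boto3; the two-week window and hourly period are arbitrary choices:

```python
# A minimal sketch of a utilization check against AWS CloudWatch using boto3.
# The instance ID is a placeholder; window and period are arbitrary choices.
from datetime import datetime, timedelta
import boto3

def cpu_stats(instance_id, days=14, region="us-east-1"):
    """Return average/peak/lowest CPU utilization over the last `days` days."""
    cw = boto3.client("cloudwatch", region_name=region)
    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=datetime.utcnow() - timedelta(days=days),
        EndTime=datetime.utcnow(),
        Period=3600,  # hourly datapoints
        Statistics=["Average", "Maximum", "Minimum"],
    )
    points = stats["Datapoints"]
    if not points:
        return None
    return {
        "average": sum(p["Average"] for p in points) / len(points),
        "peak": max(p["Maximum"] for p in points),
        "lowest": min(p["Minimum"] for p in points),
    }
```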

Limited accountability

Some organizations have the right intention behind empowering their developers: they want to move faster and disrupt industries. However, if you don’t have the right level of accountability in the environment, you will never find out who is having a party at your expense. This is common in start-ups in stealth mode and rapidly growing companies. 

You can only control what you can measure. It’s important to have the right tools in place to gain granular visibility into your environment. As a next step, develop KPIs that help you set alert points. These KPIs will also drive more accountability, which in turn helps you control cost.

Disaster recovery snapshots kept forever

It’s good practice to account for things that can go wrong. For example, taking snapshots to mitigate a disaster is critical, especially if you are using the public cloud for the first time. However, when these snapshots are kept for a long time (12 to 24 months), they cost a lot and can become outdated quickly. The worst part is that you are still paying for them. 

Keeping snapshots can be very helpful when things go wrong and you need to keep going rather than start from scratch. I have spoken with countless CTOs who sleep well because they have a disaster recovery strategy in place. However, a simple yet effective practice is to decide how long you want to retain your snapshots. Wherever these snapshots are stored, they consume resources. So come up with a rule that anything older than a certain number of months will be deleted forever.
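A minimal boto3 sketch of such a rule for AWS follows. The one-year cutoff is an illustrative assumption (set it per your regulations), and the dry-run default keeps it from deleting anything until you have verified the list:

```python
# A minimal sketch of a snapshot retention rule for AWS using boto3.
# The cutoff is an illustrative assumption; dry_run=True deletes nothing.
from datetime import datetime, timedelta, timezone
import boto3

def expire_old_snapshots(max_age_days=365, region="us-east-1", dry_run=True):
    """Delete snapshots owned by this account that are older than the cutoff."""
    ec2 = boto3.client("ec2", region_name=region)
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
    for snap in snapshots:
        if snap["StartTime"] < cutoff:
            print(f"Deleting {snap['SnapshotId']} from {snap['StartTime']:%Y-%m-%d}")
            if not dry_run:
                ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```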

Your retention period will be influenced by your industry and any regulations you need to follow. This monthly practice will keep freeing up resources you can use for something more strategic, or for those in-house projects that need resources. 

In this post, we discussed six practices that can help you save costs in both public and private clouds. Remember: you are accountable for your environment. You can take some simple steps, save money, and free up more resources for your sandbox projects. In the next blog, we will cover some KPIs you can use to keep tabs on your costs.

Experience the leading hybrid cloud management and orchestration solution. Request a CloudBolt demo today.

Infrastructure as Code (IaC), as we have written about previously, is the process of replacing the manual effort and time required for IT resource management and provisioning with simple lines of code. IaC is being adopted by developers across organizations of all sizes to deploy resources quickly with reduced management overhead. Resources can be scaled and deployed anywhere based on need and access. 

Terraform as an option for IaC

One of the key players in the IaC space is HashiCorp’s Terraform. Terraform has been adopted by enterprise customers as well as mid-market companies that want to transform their IT environments. It offers an open source solution for companies that want to empower their developers.

Solutions for Terraform and CloudBolt Customers

Joint Terraform and CloudBolt customers tell us their developers love the power of Terraform for infrastructure provisioning. IT operations teams, however, want a platform that provides complete visibility and control: one that is easy to use, does not have a steep learning curve, and provides levers to manage resources better. For these organizations, we have developed two key options our customers can use. 

With the first option, through a new third-party plug-in for CloudBolt, users can now invoke a service action in CloudBolt simply by using Terraform. Developers write their code straight into Terraform plans. Once they apply that code, it invokes an action in CloudBolt, which in turn invokes the service needed. It’s as simple as that. This option is aimed at developers who want to keep using Terraform as their infrastructure management platform. By putting CloudBolt in this flow, IT admins maintain complete resource visibility and can support Day 2 operations. 
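While the plug-in handles this wiring for you, the underlying idea is an API call into CloudBolt. The sketch below shows the general shape in Python with the requests library; the endpoint path, payload shape, and auth scheme are illustrative assumptions, not the plug-in's actual contract:

```python
# A hedged sketch of invoking a CloudBolt service over its REST API. The
# endpoint path, payload shape, and auth scheme below are illustrative
# assumptions, not the plug-in's actual contract.
import requests

CB_URL = "https://cloudbolt.example.com"  # hypothetical CloudBolt host

def order_blueprint(token, blueprint_id, group_id):
    """Submit an order against a CloudBolt blueprint and return the response."""
    response = requests.post(
        f"{CB_URL}/api/v2/orders/",  # assumed endpoint
        headers={"Authorization": f"Bearer {token}"},
        json={
            "group": f"/api/v2/groups/{group_id}/",
            "items": {"deploy-items": [{"blueprint": f"/api/v2/blueprints/{blueprint_id}/"}]},
            "submit-now": "true",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```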

With the second option, you can invoke a call into Terraform from CloudBolt blueprints to deploy the service you need. This means you do not have to drop into the Terraform Command Line Interface (CLI); you can take actions directly from CloudBolt blueprints. Day 2 operations continue to be managed the same way you manage other hybrid cloud resources in CloudBolt. 
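Conceptually, a blueprint action that drives Terraform can be as simple as shelling out to the CLI. Here is a minimal Python sketch; the function signature, status-tuple return, and plan directory are assumptions modeled loosely on CloudBolt's plug-in style:

```python
# A minimal sketch of a blueprint-style action that drives the Terraform CLI.
# The run(job, ...) signature and status-tuple return are modeled loosely on
# CloudBolt's plug-in convention; treat them, and the path, as assumptions.
import subprocess

def run(job, plan_dir="/var/opt/terraform/demo-plan", **kwargs):
    """Initialize and apply a Terraform plan non-interactively."""
    output = ""
    for args in (["init", "-input=false"], ["apply", "-auto-approve", "-input=false"]):
        result = subprocess.run(
            ["terraform", *args], cwd=plan_dir, capture_output=True, text=True
        )
        output += result.stdout
        if result.returncode != 0:
            return "FAILURE", output, result.stderr
    return "SUCCESS", output, ""
```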

CloudBolt currently supports discovering virtual machines built by Terraform providers across a range of resource types.

I am very excited about both of these options and the value they will bring to customers who want to leverage Terraform alongside the unparalleled governance capabilities of the CloudBolt platform.

If you want to see this in action, schedule a quick demo with our experts; they will be happy to showcase our Terraform support.

I am very happy to announce the general availability of CloudBolt version 9.2. This release empowers IT and developers to deliver more value to the business with less effort. You can simplify how users deploy entire container-based applications, along with the supporting Kubernetes clusters, anywhere. 

You can accelerate your Infrastructure-as-Code journey by using Terraform plans to initiate actions through CloudBolt in a hybrid cloud environment. Finally, you can innovate rapidly through CloudBolt integrations with IBM Cloud, IBM Cloud for Government, VMware Cloud on AWS and Red Hat OpenShift.

Let’s dive deeper into these features and see how you can benefit from them.

Simplify deployment of Kubernetes clusters and applications 

Kubernetes clusters are difficult to deploy and manage. Learning how, and then deploying clusters manually for every application, is time-consuming and error-prone. For an innovative enterprise deploying hundreds or thousands of applications in a short span, doing this repeatedly is tougher still. How can it be done quickly and with complete control? 

CloudBolt now enables users to deploy multi-node Kubernetes clusters for their container environments with just a few clicks. Users can also deploy container-based applications into the provisioned Kubernetes clusters with ease. For instance, you can code and test multi-node applications easily by deploying them into Kubernetes clusters from the CloudBolt platform in just minutes. Once testing is complete, the entire environment can be shut down with a click, or brought back up again through the catalog to test new iterations. 
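Under the hood, deploying an application into a provisioned cluster boils down to an API call. For a sense of what is being automated, here is a minimal sketch using the official Kubernetes Python client; the image, replica count, and namespace are placeholders, and CloudBolt's own mechanism is blueprint-driven rather than hand-coded:

```python
# A minimal sketch of deploying a container app into an existing cluster with
# the official Kubernetes Python client. Image, replicas, and namespace are
# placeholders, not CloudBolt settings.
from kubernetes import client, config

def deploy_app(name="demo-app", image="nginx:1.19", replicas=3):
    """Create a Deployment in the default namespace of the current cluster."""
    config.load_kube_config()  # uses your local kubeconfig
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name=name, image=image)]
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```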

In other words, Kubernetes cluster deployment and management that used to take hours to execute and hours or days to plan can now be done in just minutes. Thus, you can deliver value to your customers faster and get back to more strategic initiatives. 

Accelerate cloud journey using Terraform calls into CloudBolt 

Managing different tools in a multi-cloud world is a big challenge, raising issues like tool versioning and security settings. Plus, new tools are announced every day. Do you have the bandwidth and the resources to manage them all yourself? 

Through a new third-party plug-in for CloudBolt, users can now invoke a service action simply by using Terraform. Automating infrastructure actions like provisioning through Terraform plans was never this easy. This helps users stay consistent, track resource usage, and be more efficient while avoiding manual errors. They can also use CloudBolt blueprints to invoke actions in Terraform if that is the preferred option. 

The CloudBolt platform also allows users to take Day 2 actions like deleting resources. To simplify things further, we have made the open source code for this plug-in available in our GitHub repository. 

Provision IBM Cloud and IBM Cloud for Government resources with a few clicks

Do you spend hours deploying IBM Cloud and IBM Cloud for Government resources manually? Do you find it both time-consuming and cumbersome?

CloudBolt now provides new resource handlers for IBM Cloud and IBM Cloud for Government through simple blueprints. This integration lets you provision new services quickly in both IBM Cloud and IBM Cloud for Government from the CloudBolt catalog. You can stay agile and deploy to the cloud of your choice with CloudBolt. 

Provision to VMware Cloud on AWS through self-service IT

Provisioning workloads through VMware Cloud (VMC) on AWS is not easy, and it has a steep learning curve. Not everyone can manage both vCenter and AWS; many teams lack the time or the required skill sets.

Now you can manage and provision VMC on AWS resources (such as compute and storage) using CloudBolt resource handlers. Migrate and manage workloads in VMC on AWS from a single portal, without the need to manage two different cloud environments. Once in AWS, users can manage their resources through CloudBolt. API support is also available for this feature. 

Manage Red Hat OpenShift efficiently with CloudBolt blueprints

Have you tried using Red Hat OpenShift and faced challenges around integration, management, and automated actions?

CloudBolt’s new OpenShift blueprint allows users to discover, create, and delete OpenShift projects on pre-defined OpenShift clusters. You can now manage OpenShift projects, group objects (e.g., pods and services), policies, constraints (e.g., quotas), and service accounts through the CloudBolt catalog.

We would love to show these features in action and make your hybrid cloud journey simpler. Request a short demo today.

The wheel changed human society forever: travel takes less time, and with less manual effort we can focus on other essential activities. As innovation continued and more machines were invented, the need to manage them grew over time. 

Machines took on tasks that were unsafe, repetitive, and non-strategic while humans focused on strategic work. Managing the machines required more data and analytics, so we invented servers, storage systems, and other hardware to increase speed and reduce manual errors as we continued our analysis. 

However, these advancements also increased the need for manual management. Manual processes in the data center were laborious, time-consuming, and error-prone. People filled out forms, collected their managers’ approvals, and walked to every admin with the list of resources they needed, only to be told they would have to wait days or weeks to get anything. This went on for decades. Speed to market suffered and things got chaotic as organizations grew rapidly. Worse, it was not self-sustaining, since IT admins could never get to more important initiatives. 

Public clouds such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform can alleviate a lot of this pain. However, hardware management problems remain, even if reduced, especially for the sensitive data that stays in your data centers. Enter Infrastructure as Code (IaC)! Before we look at its benefits and challenges, let’s define what IaC means. 

What is Infrastructure as Code?

As Wikipedia defines it, IaC is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.

In simple words, IaC is the process of replacing the manual effort required for IT resource management and provisioning with simple lines of code. 

There are two types of IaC methods: declarative and imperative. In the declarative approach, you declare what the desired end state should be, and the system ensures that you get that outcome. In the imperative approach, you explicitly define each step in the process needed to reach the desired end state. 
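A toy Python sketch makes the contrast concrete; the cloud client below is hypothetical, standing in for any provider SDK:

```python
# A toy contrast of the two styles. The `cloud` client here is hypothetical,
# standing in for any provider SDK.

# Imperative: you spell out each step yourself, in order.
def provision_imperative(cloud):
    vm = cloud.create_vm(cpus=2, ram_gb=8)
    disk = cloud.create_disk(size_gb=100)
    cloud.attach(vm, disk)
    return vm

# Declarative: you state only the desired end state...
DESIRED = {"vms": [{"name": "web-1", "cpus": 2, "ram_gb": 8, "disk_gb": 100}]}

# ...and a reconciler figures out which steps are needed to get there.
def reconcile(cloud, desired=DESIRED):
    existing = {vm.name for vm in cloud.list_vms()}
    for spec in desired["vms"]:
        if spec["name"] not in existing:
            vm = cloud.create_vm(cpus=spec["cpus"], ram_gb=spec["ram_gb"])
            disk = cloud.create_disk(size_gb=spec["disk_gb"])
            cloud.attach(vm, disk)
```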

What are the Benefits of IaC?

Let’s take a closer look at what IaC gets your organization:

  1. Faster speed and consistency: The goal of IaC is to make things faster by eliminating manual processes and the slack they introduce. A code-based approach makes it easier to get more done in less time; there is no need to wait for an IT admin to finish the task at hand before they can get to the next one. This also means you can iterate quickly and more often. Consistency is another vital benefit of IaC: you do not need to worry about tasks going unfinished because it is a weekend or because your admin is focused on something else. You can also roll out changes globally while keeping software versions consistent. 
  2. Efficient software development lifecycle: IaC shifts power into developers’ hands. As infrastructure provisioning becomes more reliable and consistent, developers can focus more on application development. They can also script once and reuse that code many times, saving time and effort while keeping complete control.
  3. Reduced management overhead: In the data center world, separate admins were needed to govern and manage storage, networking, compute, and other layers of hardware and middleware. IaC eliminates the need for many of these roles, freeing those admins to identify the next exciting technology they want to implement.

What are the key challenges for IaC?

Every coin has two sides. While IaC adds a lot of value to the IT environment, there are some challenges that cannot be overlooked. Remember to account for the specifics of your IT situation that might make the following more or less relevant (such as organization size and where you are in the technology adoption lifecycle). 

  1. Coding language dependency: As noted earlier, IaC shifts power to developers, and because it is code-dependent, you need coding expertise. The learning curve can be steep if you do not have a developer bench ready. Some of the languages and formats used for IaC are JSON, HashiCorp Configuration Language (HCL), YAML, and Ruby; a shortage of these skill sets can hamper your IaC potential. Also, is your strategy to move away from development and go serverless? Think about the strategic direction you are heading in before you jump into IaC. It may be a pit stop you can skip if your end goal is different.
  2. Security assessment processes: Your legacy security tools and processes might not be enough in the new world of IaC. You might have to check manually that provisioned resources are operational and being used by the right applications. Although manual checking is a confidence-building step, it can take many iterations to tune your legacy security tools to IaC. Consider also that IaC is more dynamic than your existing provisioning and management processes: it can be used well, or abused, much faster. You might need to take extra steps to establish guardrails for complete governance.
  3. IaC monitoring can be challenging: Continuing the point above, you might need additional tools to track who is provisioning what, where, how often, and at what cost. Tracking usage and capacity with legacy tools such as worksheets is difficult, especially for a global company, so you may need to invest in better monitoring tools.

CloudBolt can help with your IaC needs. We can help you maintain the desired agility that IaC provides while keeping complete control and visibility.

Schedule a demo now to see CloudBolt’s IaC support in action.

Recently, mergers and acquisitions have been very common in the hybrid cloud space. On Jan. 9, Insight Partners announced its agreement to acquire Veeam Software for a whopping $5 billion.

This is a big deal in the cloud backup and data protection solutions space. Veeam, based in Baar, Switzerland, is one of the backup and data management leaders, alongside DellEMC, Veritas, and IBM. 

What is Veeam? What is this acquisition about?

Veeam has been leading the cloud data management space in the Europe, Middle East and Africa markets. It has also been capturing the US market at a steady clip and is currently at a $1 billion annual run rate. It has thousands of customers, including more than 80% of Fortune 500 companies, and a presence in more than 30 countries. 

The cloud data management industry has seen some consolidation recently. Establishing leadership in new regions and adding larger customers seems to be the idea behind the acquisition. In fact, Veeam’s new CEO William H. Largent said this acquisition will help Veeam scale at an unparalleled pace, in the US and globally.

How is Veeam used by customers?

Organizations want to move to public clouds for the agility and flexibility, on a pay-as-you-consume model, that public clouds offer. However, migrating workloads and data to the cloud poses unique challenges around data migration, data management, and data protection in public clouds. 

Customers need to ensure data quality, identify the right applications, and account for data replication and disaster recovery. Finally, they need to analyze whether refactoring old applications for the cloud makes sense, or whether they should use the cloud for greenfield applications only. 

Veeam has been instrumental in helping customers migrate and protect data across both private and public clouds. Its partnerships with hardware manufacturers such as NetApp and HPE, and with public cloud providers such as Amazon Web Services (AWS) and Microsoft Azure, have helped its customers immensely. With backup and disaster recovery solutions across the entire hybrid cloud, Veeam has provided a framework for data management.

However, to manage any hybrid cloud properly, customers also need granular visibility of their environment. Cost management is another factor to keep in mind as hybrid cloud usage scales. Also, defining guardrails that establish boundaries without hampering agility is key to governing a hybrid cloud platform. We have seen customers struggle with these challenges as hybrid cloud is adopted rapidly by different teams and manual processes start to break down. To stay ahead of these challenges, you need a proven cloud management platform. 

How can CloudBolt help your Veeam integration and management?

CloudBolt is a trusted platform for customers who want to transform their IT environments. It integrates with solutions including Veeam, letting users focus on more strategic activities by automating mundane tasks such as template-based orchestration and establishing guardrails for consumption. 

CloudBolt helps customers using Veeam with use cases including provisioning Veeam-tagged resources, running Veeam backup and replication policies on existing servers, and automating recurring jobs. All of this is done through a simple UI extension in CloudBolt.

Hundreds of customers have made their cloud environments more efficient through CloudBolt’s unparalleled self-service, cost visibility, and orchestration capabilities. It can be deployed in minutes, is simple to use, and is easy to extend through API integrations, making it the most powerful tool in your infrastructure management arsenal.

Learn more about this integration and how CloudBolt can help you maximize your cloud environment.

Welcome to this week’s edition of CloudBolt’s Weekly CloudNews!

Earlier this week on our blog, we explored Cloud Solutions in the Multi-Cloud Era. We also officially announced the launch of CloudBolt 9.0—Cumulus.

With that, onto this week’s news:

Recent AWS Billing Error Points to Need for Partner-Led Cloud Management

Kelly Teal, Channel Futures, Oct. 15, 2019

“Telecom expense management has been a familiar part of the indirect channel landscape for at least a dozen years. But the need for its more evolved counterpart, cloud management, provisioned through partners, is becoming more apparent.

In late September, word quickly spread that Amazon Web Services had overbilled a number of customers throughout the world. It was an accident, and one that AWS caught and corrected right away. Nonetheless, the incident made clear that channel partners can, and should, play a larger role in their enterprise clients’ cloud management efforts.

And therein lies the real opportunity for channel partners. While there is legitimate reason to track and monitor cloud expenses for organizations, there is even more call to ensure they use what they buy, and control consumption.”

Nutanix Outlines Cloud Footprint Expansion As Talk Of Acquisition Rears Head

Antony Savvas, Data Economy, Oct. 11, 2019

“Nutanix is planning to increase its data centre footprint further to support the increased number of data storage, disaster recovery and cloud orchestration and management services it is planning in the near- to medium-term.

The company now generates the majority of its sales from software and services, which is a far cry from when it gained quick traction for its hyperconverged infrastructure (HCI) appliances a few years ago.

At the firm’s annual .NEXT EMEA customer and partner event in Copenhagen this week, Nutanix CEO Dheeraj Pandey told Data Economy: “We will expand our cloud reach as the compute has to come to the customer data now being generated in the cloud and at the edge.”

He said this would involve widening the company’s partnerships with data centre operators and public cloud providers to increase the number of the firm’s cloud regions, to make sure services more easily comply with data laws such as GDPR and data sovereignty demands from both enterprises and governments.”

CenturyLink Adds Google to Cloud Destinations

Edward Gately, Channel Partners Online, Oct. 15, 2019

CenturyLink has expanded its Cloud Connect Dynamic Connections service to Google Cloud Platform.

The move provides a new option for connecting business premises and public data centers to cloud environments. It allows self-serve, real-time, dedicated network connectivity across thousands of endpoints in North America, Asia Pacific and Europe through CenturyLink’s global fiber network.

Chris McReynolds, CenturyLink‘s vice president of core network services, tells Channel Partners the top three public cloud providers are Google, Microsoft and Amazon, and adding Google to his company’s cloud destinations for Dynamic Connections is “key to supporting the customers’ need to move and stand up workloads when they want and where they want.”

Dale Carnegie taught our grandparents 80 years ago that a “person’s name is to that person the sweetest sound in any language.” For IT organizations the sweetest sound is often…a hostname.
