The FinOps Foundation’s flagship conference has kicked off in Europe, and it’s set to be a remarkable event. Attendees familiar with the San Diego event will feel right at home, just with the Mediterranean as the backdrop instead of the Pacific. But even those new to the FinOps scene will quickly find themselves among friends, both familiar and new.

The inaugural FinOps X Europe conference has brought the vibrant energy of San Diego to Barcelona’s stunning beachfront, marking a significant milestone in the global expansion of the FinOps movement. As the cloud financial management landscape continues to evolve, several transformative trends have emerged that are reshaping how organizations approach cloud spending and optimization. Here is what you need to know from the first day.

The Expanding Scope of FinOps

A major revelation from the conference was that FinOps practices are no longer confined to public cloud spend. J.R. Storment, Executive Director of the FinOps Foundation, shared early data from the State of FinOps survey showing that many FinOps teams are already being asked to play strategic roles in SaaS, licensing, and data center management.

Beyond Public Cloud

“Instead of cloud teams being siloed off, they’re doing FinOps over here in the corner. They become part of the larger integration and help the business move faster,” Storment explained. As cloud spend becomes a larger share of IT budgets, FinOps teams are increasingly collaborating with IT finance, procurement, and software asset management functions to provide a unified view of technology spend.

FOCUS 1.1 Is Here

A major highlight will be the deep dive into FOCUS™ 1.1, the latest version of the FinOps Open Cost and Usage Specification. Released just days before the conference, FOCUS 1.1 introduces enhancements that allow FinOps practitioners to perform more granular multi-cloud analysis. Key updates include new columns for tracking capacity reservations, commitment discounts, service subcategories, and SKU data. Sessions will explore how these updates improve the precision of cloud cost management and enable more effective decision-making. With incremental releases planned every six months to expand support for SaaS datasets and other IT costs, adopting FOCUS is a key topic for vendors and practitioners alike.
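
To make the new columns concrete, here is a minimal sketch, in Python with pandas, of the kind of cross-cloud slicing a FOCUS-formatted export enables. The file name is hypothetical, and the column names (ProviderName, ServiceCategory, ServiceSubcategory, BilledCost, CommitmentDiscountId) follow FOCUS naming conventions but should be verified against the published 1.1 column list before you depend on them.

```python
# A minimal sketch of multi-cloud analysis over a FOCUS-formatted export.
import pandas as pd

# Hypothetical FOCUS-formatted billing export (CSV assumed for illustration).
df = pd.read_csv("focus_billing_export.csv")

# Spend by provider and service subcategory: the kind of granular,
# cross-cloud slice the 1.1 columns are meant to enable.
by_subcategory = (
    df.groupby(["ProviderName", "ServiceCategory", "ServiceSubcategory"])["BilledCost"]
      .sum()
      .sort_values(ascending=False)
)

# Share of spend covered by commitment discounts, per provider.
committed = df[df["CommitmentDiscountId"].notna()].groupby("ProviderName")["BilledCost"].sum()
total = df.groupby("ProviderName")["BilledCost"].sum()
commitment_coverage = (committed / total).fillna(0)

print(by_subcategory.head(10))
print(commitment_coverage)
```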

AI Integration and Cost Management

Another key theme was applying FinOps principles to managing the cost and value of artificial intelligence and machine learning initiatives. As Storment pointed out, the question is not just what AI can do for FinOps, but “how you apply the FinOps practice to your AI so that you can ensure the business value.”

AI Implementation Patterns

With the rapid adoption of cloud-based AI services, FinOps teams need to help business leaders evaluate trade-offs between cost, performance, and business impact of different AI implementation options, from SaaS AI services to specialized hardware like GPUs. Optimizing usage and workload placement will be critical FinOps activities.
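
As a back-of-the-envelope illustration of that trade-off, the sketch below compares a pay-per-token SaaS AI service against dedicated GPU hosting. Every price and volume in it is an assumed placeholder, not a vendor quote; the point is the shape of the comparison, not the numbers.

```python
# Back-of-the-envelope comparison of two AI implementation options.
# All prices and volumes are illustrative placeholders, not vendor quotes.

def saas_api_monthly_cost(requests_per_month: int, tokens_per_request: int,
                          price_per_1k_tokens: float) -> float:
    """Cost of a pay-per-token SaaS AI service."""
    total_tokens = requests_per_month * tokens_per_request
    return total_tokens / 1000 * price_per_1k_tokens

def self_hosted_monthly_cost(gpu_hourly_rate: float, gpus: int,
                             hours_per_month: float = 730) -> float:
    """Cost of dedicated GPU capacity running around the clock."""
    return gpu_hourly_rate * gpus * hours_per_month

if __name__ == "__main__":
    saas = saas_api_monthly_cost(requests_per_month=2_000_000,
                                 tokens_per_request=1_500,
                                 price_per_1k_tokens=0.002)
    hosted = self_hosted_monthly_cost(gpu_hourly_rate=4.00, gpus=8)
    print(f"SaaS API estimate:    ${saas:,.0f}/month")
    print(f"Self-hosted estimate: ${hosted:,.0f}/month")
    # The FinOps role is to pair numbers like these with performance and
    # business-impact data before a direction is chosen.
```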

Shift (all the way) Left

FinOps is “shifting left” to the earliest stages of product development to enable data-driven build vs. buy decisions. Rather than being parachuted in post-deployment to optimize costs, FinOps teams are partnering with product owners to model TCO of different architectural choices and challenge assumptions early.

Earlier Integration

“At the phase of figuring out what business problem we’re solving, what challenge we’re solving, what value we need to bring, the FinOps team is coming in to help model the potential cost across those different types of technology,” explained Storment. This underscores the increasingly strategic nature of the FinOps role.

Ready for the Challenge

CloudBolt, having attended FinOps conferences for the past two years, wouldn’t miss this inaugural European event. As a Gold Sponsor, CloudBolt will be at the heart of the action, ready to tackle the expanding scope of FinOps head-on.

With recent announcements like the partnership with CloudEagle for SaaS management and the launch of CloudBolt Agent for private cloud and data center visibility, CloudBolt is proving its commitment to providing a comprehensive FinOps solution. Add to that their early bets on being 100% FOCUS-ready, and it’s clear that CloudBolt is at the forefront of the FinOps evolution.

In fact, CloudBolt’s Chief Product Officer, Kyle Campos, will be leading an interactive chalk talk on AI in FinOps. This collaborative deep dive will cover AI/ML applications, strategies for maximizing ROI, and how FOCUS accelerates AI adoption. It’s a session not to be missed. For those ready to see CloudBolt in action, their product experts will be on hand to provide personalized demos showcasing their upcoming features, including conversational AI for cost management, AI-driven optimization insights, and advanced automation to close the “insight to action” gap.

Looking Forward

FinOps X painted an exciting portrait of the future of cloud financial management. As FinOps practices mature and expand, they will play an increasingly central role in shaping enterprise technology strategy and optimizing cloud investments. Practitioners will need to embrace new skills and stakeholder relationships to manage this broadening scope.

Ultimately, FinOps remains a people-centric, collaborative discipline focused on maximizing the business value of cloud. “The decisions can’t be made in a vacuum,” stressed Storment. “The business value comes out of the people, the collaboration of the processes, the time they get into the decision making happens.”

One thing is clear: FinOps is becoming an indispensable capability for any organization seeking to maximize the value of cloud. Finance, engineering, and business leaders should look to strengthen their FinOps muscles to enable more real-time, data-driven decision making and collaboration in the cloud era.

Additionally, the new Use Case Query Sandbox is now live at focus.finops.org/sandbox. This sandbox allows users to execute Use-Case SQL queries against sample data from cloud providers and view the results.

Ready to learn more? Schedule a quick chat.

Buckle up, folks! The rapid evolution of cloud services and the rise of generative AI are reshaping how organizations approach technology adoption in 2024. As businesses saddle up for this wild ride, they’ve got to balance the game-changing potential of AI with the challenges of wrangling costs and complexity in an ever-shifting environment. In this no-holds-barred exposé, CloudBolt explores the key trends shaping cloud service adoption, focusing on how generative AI is shaking things up and what you need to do to stay ahead of the pack.

Generative AI: The New Gold Rush

Generative AI, powered by large language models like GPT-4 and PaLM, is driving a significant shift in cloud adoption patterns. The immense computational requirements of these models, often relying on specialized hardware like GPUs and TPUs, are pushing businesses to leverage cloud infrastructure to access the necessary resources without substantial upfront investments.

Cloud providers, always ready to seize an opportunity, have responded with AI-as-a-Service (AIaaS) offerings that democratize access to powerful AI capabilities. But hold onto your wallets: the insatiable appetite of AI workloads is driving up cloud spending, forcing companies to get creative and adopt new cost management strategies.

According to McKinsey’s podcast “Rewiring for the era of gen AI,” only about 10% of companies were realizing significant value from generative AI as of 2023, with many getting stuck in “death by a thousand pilots” without scaling impact. This highlights the importance of effectively managing costs and scaling AI initiatives to drive real business value.

The Good, the Bad, and the Costly: AI Workload Management

According to Forrester’s Technology & Security Predictions 2025, many enterprises will prematurely scale back AI investments due to ROI pressure, with 49% expecting ROI within 1-3 years and 44% within 3-5 years.

The dynamic nature of AI workloads presents unique challenges for cost management in the cloud. The pay-as-you-go model, while offering flexibility, can lead to unpredictable expenses as AI workloads require dynamic scaling based on fluctuating demand. Over-provisioning resources or underestimating capacity needs can result in either wasted spend or performance bottlenecks.

To make matters worse, GPU shortages have turned scaling generative AI applications into a game of musical chairs: scarce capacity, skyrocketing prices, and wait times longer than a DMV line. This necessitates careful capacity planning, a practice not typically associated with traditional cloud services but now essential for ensuring consistent AI workload performance.

Capacity Options: Pick Your Poison

Organizations have two primary options when procuring capacity for AI workloads:

  1. Shared/On-Demand Capacity: The “live fast, die young” approach. This pay-as-you-go model offers flexibility but comes with the risk of unreliable performance during peak demand, when everyone is trying to grab a slice of the pie.
  2. Dedicated/Provisioned Capacity: The “slow and steady wins the race” option. Reserved access guarantees consistent performance, but you’d better be ready to commit long-term and fork over higher upfront costs.

Choosing between these options involves complex tradeoffs between cost and performance, a bit like navigating a minefield: one wrong move and your costs can blow up in your face. Dedicated capacity might keep things running smoothly, but you could end up paying for resources you don’t need when demand takes a nosedive.
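
One way to ground that choice is a simple break-even calculation: dedicated capacity wins once utilization is high enough that paying for every hour beats paying on-demand rates for only the hours you use. The sketch below assumes illustrative hourly rates, not real pricing.

```python
# Break-even sketch: at what utilization does dedicated (provisioned) capacity
# become cheaper than shared/on-demand? Rates are placeholders, not quotes.

ON_DEMAND_HOURLY = 6.00   # assumed on-demand GPU rate, $/hour
DEDICATED_HOURLY = 3.60   # assumed effective rate with a long-term commitment
HOURS_PER_MONTH = 730

def monthly_cost_on_demand(utilization: float) -> float:
    # On-demand: you only pay for the hours you actually use.
    return ON_DEMAND_HOURLY * HOURS_PER_MONTH * utilization

def monthly_cost_dedicated() -> float:
    # Dedicated: you pay for every hour whether you use it or not.
    return DEDICATED_HOURLY * HOURS_PER_MONTH

break_even = DEDICATED_HOURLY / ON_DEMAND_HOURLY
print(f"Dedicated wins above ~{break_even:.0%} utilization")

for utilization in (0.3, 0.6, 0.9):
    od = monthly_cost_on_demand(utilization)
    ded = monthly_cost_dedicated()
    cheaper = "dedicated" if ded < od else "on-demand"
    print(f"{utilization:.0%} utilization: on-demand ${od:,.0f} vs dedicated ${ded:,.0f} -> {cheaper}")
```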

Gunfight at the Token Corral

Generative AI services introduce an additional layer of complexity through token-based billing. Tokens represent the data processed by an AI model, with different models consuming tokens at varying rates. This abstraction complicates cost management, as predicting token consumption over time becomes challenging, making it difficult to align budget forecasts with actual usage.
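
A rough forecast still beats no forecast. The sketch below estimates monthly token spend across a few usage scenarios; the per-1K-token prices and token counts are assumptions for illustration, not published rates.

```python
# Rough token-spend forecast for a generative AI feature.
# Token counts and prices are illustrative assumptions, not published rates.

PRICE_PER_1K_INPUT_TOKENS = 0.0015   # assumed
PRICE_PER_1K_OUTPUT_TOKENS = 0.0060  # assumed

def monthly_token_cost(requests: int, avg_input_tokens: int, avg_output_tokens: int) -> float:
    input_cost = requests * avg_input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
    output_cost = requests * avg_output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    return input_cost + output_cost

# Forecast a range rather than a point estimate, since prompt length and
# response length tend to drift as the feature evolves.
for scenario, requests, inp, out in [
    ("low",  500_000,   800, 300),
    ("base", 1_000_000, 1_200, 500),
    ("high", 2_000_000, 2_000, 900),
]:
    print(f"{scenario:>4}: ${monthly_token_cost(requests, inp, out):,.0f}/month")
```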

Best Practices for Managing Cloud Costs in the Age of AI

To effectively navigate these challenges, organizations should adopt several best practices:

  1. Regular Load Testing: Constantly put your AI workloads through their paces to make sure they can handle whatever’s thrown their way without blowing your budget.
  2. FinOps and Engineering Collaboration: Get your financial gurus and machine learning masterminds working hand-in-hand to optimize resources and keep costs in check.
  3. Failover Logic Between Capacity Types: Make sure you’ve got a failsafe to switch between dedicated and shared capacity when the going gets tough (a minimal sketch follows this list).
  4. Incremental Scaling: Dip your toes in the water with small-scale experiments before committing significant resources; it helps identify potential issues early and reduces financial risk.
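
For item 3, here is a minimal sketch of what failover between capacity types can look like. The call_provisioned and call_on_demand functions are hypothetical stand-ins for whatever SDK or endpoint your AI provider actually exposes.

```python
# Minimal sketch of failover between provisioned and on-demand capacity.
# `call_provisioned` and `call_on_demand` are hypothetical client functions.
import time

class CapacityExhausted(Exception):
    """Raised when the provisioned pool has no throughput left."""

def call_provisioned(prompt: str) -> str:
    raise CapacityExhausted  # placeholder: imagine the reserved pool is full

def call_on_demand(prompt: str) -> str:
    return f"on-demand response to: {prompt}"  # placeholder

def generate(prompt: str, retries: int = 2) -> str:
    """Prefer consistent provisioned capacity; fall back to on-demand."""
    for attempt in range(retries):
        try:
            return call_provisioned(prompt)
        except CapacityExhausted:
            time.sleep(2 ** attempt)  # brief backoff before retrying
    # All retries exhausted: spill over to shared capacity rather than fail.
    return call_on_demand(prompt)

print(generate("Summarize this invoice"))
```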

Blazing Trails and Managing the Books: Balancing AI Innovation and Cost Management

In the era of generative AI, organizations must adopt a proactive approach that balances the pursuit of innovation with robust financial oversight. Key strategies include:

  1. AI-Specific Cost Management: Implementing AI-centric cost management practices, such as real-time usage tracking, performance metrics analysis, and cost allocation tagging, is essential for optimizing spend.
  2. Collaborative Governance: Establishing clear governance frameworks that foster collaboration between FinOps, engineering, and business teams is critical for aligning AI initiatives with organizational goals and ensuring responsible AI adoption.
  3. Continuous Optimization: Regularly reviewing and adjusting cost management strategies based on evolving AI landscape and business needs is crucial for maintaining a competitive edge.
  4. The Multi-Cloud, Hybrid Cloud Shuffle: As costs climb and flexibility becomes the name of the game, more and more organizations are embracing the multi-cloud or hybrid cloud tango. By spreading workloads across multiple cloud providers or mixing and matching public and private infrastructure, businesses can strike a balance between performance and cost while avoiding vendor lock-in.

Conclusion

The transformative power of generative AI is both a blessing and a curse, offering unparalleled opportunities while presenting new challenges in cost management and operational complexity. But fear not, intrepid adventurers! By embracing best practices like load testing, collaborative governance, and continuous optimization, you can harness the power of AI while keeping costs in check and staying ahead of the competition. The key is to adopt a holistic approach that brings together people, processes, and technology, empowering your business to thrive in the wild west of generative AI.

Click here to learn more about AI and Augmented FinOps.

Introduction

As cloud costs continue to rise, comprising an ever-larger share of IT budgets, there is increasing executive scrutiny on demonstrating a return on investment from cloud spend. However, for many organizations, realizing the full business value of the cloud has been constrained by manual processes, limited data, and misaligned teams.

In this blog series, we will explore some of the key cloud financial management challenges and discuss potential technology solutions to help address these gaps. By taking an augmented approach that layers intelligent automation, comprehensive data, and cross-functional alignment on top of cloud FinOps foundations, organizations can break through barriers to maximize value.

Read our in-depth guide to learn best practices for cloud financial management.

Challenge #1: Manual Processes

A significant finding from recent state of FinOps reports is that despite strong buy-in and prioritization of FinOps, much of the day-to-day operation remains manual. As FinOps analyst Tracy Woo of Forrester notes, “the business impact of FinOps is constrained today by toilsome, episodic human activity.”

This reliance on manual workflows and processes – whether in governance policy implementation, optimization, or reporting – severely reduces speed to action and diminishes the value of any insights uncovered. As Woo states, “every second that passes from insight to action diminishes relevance and value.”

Potential technology solutions:

By leveraging automation, organizations can accelerate the impact of FinOps, translating insights into outcomes within seconds rather than months.
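
As a sketch of what that insight-to-action loop can look like, the example below turns an idle-resource finding into an automated remediation. The fetch_cost_anomalies, stop_instance, and notify_owner functions are hypothetical placeholders for a cost tool’s API and an automation platform, not any specific product’s interface.

```python
# Sketch of an insight-to-action loop: an anomaly finding triggers a
# remediation workflow instead of waiting for a human to read a report.
from dataclasses import dataclass

@dataclass
class Anomaly:
    resource_id: str
    owner: str
    daily_cost: float
    idle_days: int

def fetch_cost_anomalies() -> list[Anomaly]:
    return [Anomaly("vm-1234", "team-data", 86.40, idle_days=9)]  # sample data

def stop_instance(resource_id: str) -> None:
    print(f"stopping {resource_id}")  # placeholder for a real API call

def notify_owner(owner: str, message: str) -> None:
    print(f"notify {owner}: {message}")  # placeholder for chat/email/ticket

IDLE_THRESHOLD_DAYS = 7

for finding in fetch_cost_anomalies():
    if finding.idle_days >= IDLE_THRESHOLD_DAYS:
        stop_instance(finding.resource_id)
        notify_owner(finding.owner,
                     f"{finding.resource_id} was idle {finding.idle_days} days "
                     f"(~${finding.daily_cost:.2f}/day) and has been stopped.")
```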

Challenge #2: Limited Data

Another clear gap limiting FinOps success is data, with analysis and recommendations often constrained to the narrow domain of cloud infrastructure spend. Without incorporating broader data sources such as on-premises, SaaS, and custom business metrics, the insights are restricted.

“Finops has been starved of much of the data dimensionality necessary to deliver intelligent insight and action,” says Kyle Campos, CPO of CloudBolt. “Bespoke, isolated data streams continue to generate naive recommendations with low business impact.”

Potential technology solutions:

With a complete picture of spend and utilization across all IT domains, FinOps can evolve from reactive cost management to proactive value optimization.

Challenge #3: Misaligned Goals Across Teams

Finally, organizational misalignments between finance and engineering teams continue to limit FinOps success. Without shared incentives or common data-driven risk models, the impact of FinOps rarely extends beyond finance.

“There is a lack of shared language and incentives between finance and engineering,” Woo explains. “They are still locally optimizing within their own context rather than facilitating outcomes, leading to tension and poor adoption.”

Potential technology solutions:

Creating joint understanding and ownership of cloud investments powered by enabling technologies can transform this dynamic to maximize results.

Conclusion

As the cloud becomes increasingly vital to IT strategies, the imperative to demonstrate value rises. We have highlighted three key problem areas constraining higher impact FinOps in most organizations today – manual processes, limited data, and misaligned teams. However, by taking an augmented approach leveraging automation, AI, and unified data to extend FinOps capabilities enterprise-wide, these barriers can be overcome to realize greater ROI.

Ready to learn more?

See CloudBolt in action.


Near the end of every year, everyone likes to look ahead and imagine how life might change. It’s fun and it allows us to write about what we want! 

So Forrester gives us their predictive takes (Predictions 2023) on what’s coming in 2023 and I thought, why not weigh in on those that affect our world of multi-cloud infrastructure?

CloudBolt could NOT agree more! Being pragmatic is a requirement for transitioning from the ‘great resignation’ into a possible recession and a period of ‘do more with less’. Automation ensures greater coverage, efficiency, and productivity. One of CloudBolt’s strengths is its automation and orchestration capabilities around the delivery, usage, and reporting of resources and applications. Every organization has multiple automation scripts and snippets, but they’re often trapped within organizational silos. CloudBolt can tap into that well and bring order to automation chaos. Speed to market and new innovation with fewer people is the goal! Another angle to automation is security and governance: through automation, you can dictate safe, standardized, and predictable behavior!

Kubernetes use and investment will continue to rise, and SHOULD, but VM-based workloads are not going to be totally replaced in 2023. A transition period has begun around DevOps, CloudOps, and ITOps, and it centers on delivering resources faster, more consistently, and in a governed and well-tracked manner. Call it K8s, an Internal Developer Portal (IDP), or ‘Platform Engineering’; they’re all poking at the same issue: how to more efficiently deliver innovative solutions to problems (customer or internal). With an API-based open architecture, CloudBolt supports containerization today and can easily adopt the next tool to aid DevOps, FinOps, or SecOps that is right around the corner!

Forrester claims there is “an average of 200,000 open tech jobs that cannot be filled due to lack of suitable candidates.” I think automation and orchestration play another HUGE part here too! If I can automate the tasks of 3 people into 1, then I don’t need as many people. “But what about the skills?” Build some of that into the process. If this happens, then do this… the essence of automation! Part of the skills gap problem is that we are constantly changing, seeking new and innovative ways to deliver. But by doing so, we CREATE a skills gap. CloudBolt addresses this by building expertise into the workload or container. We build Terraform and Kubernetes workloads, for example, that don’t require users to KNOW those tools; they just click the button and it works!

The 2021 Verizon Data Breach Investigations Report found that miscellaneous errors were responsible for almost 20% of all data breaches. Much more eye-opening is a Gartner survey finding that misconfigurations cause 80% of all data security breaches! IT’S A PROBLEM, and scandals caused by cyber-attacks erode customer trust.

WHAT CAN YOU DO ABOUT IT? 

Of course, you can invest in tools to monitor for bad behavior, alert on its presence, and protect credentials, but reducing application and resource misconfigurations and human errors addresses these hidden vulnerabilities at the source. CloudBolt enables you to offer workloads that orchestrate approved and tested processes and resources. Those procedures are governed by policy to ensure they execute safely and consistently every time, without thought from users.

We regularly hear our customers say they were rapidly able to reallocate 20-30% of their staff to other high-value projects or that we gave them back the equivalent of one full workday per week! 

EVERY new year brings uncertainty. You know the problems and issues that plague your organization, and some, like those above, are likely to be written about. But regardless of today’s and tomorrow’s problems, the tools of automation, easy adoption (APIs), and tracking become essential. CloudBolt is focused on addressing the operational complexities introduced in a multi- and hybrid-cloud environment. How can we help you?

Cloud Management Platforms (CMPs) are largely sold to IT and IT Operations teams to help deliver infrastructure to the business more efficiently. That includes self-service automation for developer users across a mix of on-prem, private, and public clouds (hybrid multi-cloud). A new term with striking similarity is emerging: the Internal Developer Portal/Platform (I’ve seen it both ways). The “IDP” acronym causes issues with a more popular term of the same three letters – Identity Provider.

Internal Developer Platforms claim to automate the app delivery process from a serve-yourself portal. Security is built-in so developers don’t have to remember. Guardrails keep bad behavior out and APIs enable choice and flexibility among favorite tools (Terraform, OpenShift, Ansible, Kubernetes, etc.). Ummm… this sounds a lot like CMPs and their self-service portals with automation, governance, and out-of-the-box integrations. 

ALWAYS consider these KEY audiences when managing hybrid multi-cloud infrastructure 

Cloud Management Platforms were touted to bring order to cloud infrastructure chaos and they did, to a certain extent. We got self-service portals and infrastructure was served up much faster but you had to bring some expertise with you to get the speed. You had to enter certain values or point to particular clusters and remember to add the security parameters in and ensure everything was tagged, and… So it was faster, better than waiting 3 weeks! But not quite ideal. 

There are four key audiences involved in infrastructure use, each with a different goal: 

All four audiences should be involved in a self-service solution, or you run the risk of silos, poor solution choices, and too many resources tied up going in different directions. If you attack the problems as a cohesive whole, you have a much greater chance of success. Next, let’s look into the capabilities that matter!

Call it “whatever” – critical to managing hybrid multi-cloud infrastructure 

Summary 

At the end of the day, vendors and analysts can play around with names, meanings, and endless acronyms but if you involve key stakeholders and solve your collective problems, it doesn’t matter what you call it… it will be successful! 

And when you’re talking about customers, security, speed, and automation, getting it right can be the difference between thriving and just surviving in today’s uncertain economic times. When seeking solutions, look for flexibility and extensibility; the next open-source super-solution is right around the corner and you WILL WANT to take advantage of it. Make sure your framework choice doesn’t force you down a particular path. Another piece of advice: get a cost management/cost visibility solution in place WHEN you deploy self-service automation. If you don’t, you’re inviting trouble. Self-service ease with no EYES on costs is a disaster waiting to happen… I think the finance and accounting people reading this just gasped… a little.

Businesses are adopting software at a faster rate than ever before, with cloud/SaaS apps for things like sales force automation or cloud cost management being taken up at unprecedented rates. But just because you’ve implemented a new piece of software doesn’t mean everyone will use it. Even if that application can save time, improve processes, or help users be more productive, people are resistant to change. Having an adoption plan early, while you’re still evaluating new software, pays dividends later.

“78% of mobile apps are abandoned after only a single use, and web applications and software don’t fare much better”

Doesn’t matter how fast you implement software if no one uses it 

This may sound obvious, but it’s an important point. So much time and effort are focused on going faster that sometimes we miss the point. Speed is great, but racing to a finish line where no one uses the software is simply a waste! Glad we got that out of the way. Now let’s look at some of the factors that make software adoption more important than deployment.

Software adoption is a process of change — organizational & personal  

In order to achieve full adoption, it’s important to consider the entire organization that will be using the software. For example, if you buy a self-service automation solution to serve infrastructure to engineering teams and they don’t use it, it doesn’t matter that it’s easier, faster, and more secure.

When implementing any new system, there are many stakeholders who need to be involved in the decision-making process (and ideally will also feel ownership over its success). The trick is getting these various groups on board with the idea of adopting your new solution and making sure they know what they’ll need from it once it’s up and running. In the case of cloud operations management, you have an IT audience, an Engineering audience, a Finance audience, and often a Security audience. A successful project starts with pulling together key stakeholders and understanding their needs. 

Adoption starts when evaluating new software 

The best time to start talking about adoption is when you’re evaluating new software systems. 

The process of adopting a new product or system can be broken down into three phases: 

Think early about use in a daily workflow. Do you use incentives? Do you get leaders to adopt 1st and “show” others the benefits? Do you use negative consequences? Understand your audience’s daily workflow and show how/where new software improves it with minimal effort. 

THE KEY – A comprehensive plan  

The plan needs to include all key applicable parties. It’s not enough for your technology team to create or buy new software; everyone involved in the project also needs to understand how it will be used and how they will benefit. You can’t just develop or buy an amazing capability and then expect people within your organization to adopt it on their own; you need everyone at every level of your company on board with your vision and mission before major changes are made. In my example above, this would include ITOps, DevOps, FinOps, and SecOps.

The best way I’ve found to achieve this is by creating an end-to-end strategy: one that includes all aspects of the project’s lifecycle, from initial requirements gathering through post-production maintenance, and makes sure each applicable team understands how they fit into this process. That way, if someone has questions about something they’re working on, they know who should help them with particular issues because we’ve already discussed what needs to be done beforehand. It also ensures we don’t miss anything important when building out new processes or approaches because we’ve covered everything during the planning stage. I cannot stress enough how much feeling like part of the answer early helps adoption later.

Don’t get too caught up in demoing features or checking off list items 

Instead, focus on the user(s). Help them think about how this software is going to improve their lives and compare it to what they do today. You’ll find that users will be more engaged with your ‘product’ when they recognize its value early on. Show them why they need it. Show them what’s needed to get up and running. Speak in the language of your users. If they call something “Blue” when everyone else calls it “Red”, use their terms. And that’s when your team will have an easier time delivering something impactful!  

Using my example above, this could be showing the development team how much time they can save by automating delivery. Or it could be showing the finance team that you can tag, track, and accurately report on resource use to properly charge back and plan. Or it could be showing the security team how you can prevent bad behavior from ever happening in the first place.

Think about users’ needs and goals first, then work backward 

We often talk about the importance of understanding user needs, goals, and business requirements before delivering software. But we don’t often talk about how important it is to understand your software architecture for the same reason: because it’s hard to iterate on a system without knowing what that system is going to do first. 

To ensure an optimal design, there are several steps to follow:

Conclusion 

If you’re planning a software rollout, chances are you’re thinking about adoption. That’s good! But remember that adoption is just one part of the process. To make sure everyone on your team is on board with your plans and prepared for change, you need to start by understanding their goals and making sure they understand yours as well. Do this early; it will pay dividends later.

See how CloudBolt can help you optimize your cloud fabric.

Developers typically are technology junkies. By nature, they love to explore the latest, greatest tools and try out new capabilities. As a result, organizations end up having hundreds, if not thousands, of developers out doing their own thing – downloading open-source and freeware tools and intermingling them into processes along with technology provided by the organization – to innovate and drive new ways to solve customer problems.  

This can cause issues including: 

While devs explore new tools in pursuit of finding ways to be faster, better, and more innovative, the reality is that the opposite often occurs instead. 

Challenges of rogue tool adoption 

One of our prospects described their developer community as “6,000 different snowflakes”, each with perceived unique needs and siloed from one another. This is not collaborative. This is not efficient. If 6,000 different people are using Terraform, chances are they could be using it 6,000 different ways – and some of those WILL be better than others, but they’ll never know because they aren’t sharing what works best.  

Furthermore, no one learns technology through osmosis; there is a learning curve during which people are not yet providing value. Each person is constrained by their individual level of proficiency with the tool. How much productive time is lost to learning new tools every week, month, or year? 

How do you enable devs to use whatever tools they want while still complying with governance? How do you get them to build security into their applications and processes? How do you get them to follow procedures outlined in everyone’s best interest (efficiency & risk)? How do you ensure a majority use IT-sanctioned and approved resources? 

THE ANSWER: AUTOMATION 

Ways to improve the situation 

Reduce the learning curve by automating steps. For instance, build a Terraform plan with the required infrastructure calls already built in. Do this for Ansible playbooks, Chef recipes, any tool! Doing so reduces the skill required: devs don’t have to know the underlying platform; they simply execute on the infrastructure options provided. Allowing devs to choose the right tool for the job is good; ensuring everyone can use those tools in a standardized, secure, and optimized way is better. 

Latest estimates show that devs are spending anywhere from 19-26% of their time building and maintaining their own environments so they can do their jobs! How much faster could you propel your business forward if developers got back 8 hours or more per week to innovate versus having to learn tools or provision resources? Building automation, security and compliance into a workload is ideal and ensures devs use approved and protected workloads consistently across varying platforms and clouds. Automation allows you to build it all in for them to make it a “one-click” order. By doing so, you can abstract away complexities like networking, storage, environmental constraints, security footprint, power management, and more. 
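
Here is a minimal sketch of that “build it in” idea: a one-click blueprint where the platform team has already decided the network, security, and tagging defaults, and the developer supplies only a handful of inputs. The names and defaults are illustrative assumptions, not CloudBolt’s actual API.

```python
# Sketch of a one-click blueprint: networking, tagging, and security defaults
# are decided by the platform team, not each developer.
# Illustrative only; not any vendor's real interface.

APPROVED_DEFAULTS = {
    "network": "app-subnet-standard",
    "security_group": "baseline-hardened",
    "backup_policy": "daily-30d",
}

REQUIRED_TAGS = ("owner", "cost_center", "environment")

def order_workload(app_name: str, owner: str, cost_center: str,
                   environment: str = "dev", size: str = "small") -> dict:
    """The only inputs a developer supplies; everything else is baked in."""
    tags = {"owner": owner, "cost_center": cost_center, "environment": environment}
    missing = [t for t in REQUIRED_TAGS if not tags.get(t)]
    if missing:
        raise ValueError(f"missing required tags: {missing}")
    return {
        "name": app_name,
        "size": size,
        "tags": tags,
        **APPROVED_DEFAULTS,  # guardrails applied automatically
    }

print(order_workload("invoice-service", owner="dev-team-7", cost_center="CC-1021"))
```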

The challenge is real and complex, but the answer is straightforward. Organizations must accept that Devs will continue to explore and use the latest tools (especially open source). But to be productive, responsible, and efficient, companies need to ensure that automated guardrails exist to remove complexity, ensure optimization, and eliminate risks. If you’re looking for the easiest and most comprehensive way to accomplish that in your organization, CloudBolt is here to help!

Ready to see it for yourself? Request a demo!

To read Part 1, click here

You can only track what you know and tag – Cost Management Only As Good As Your Cloud Management 

No one is able to gain visibility into “shadow IT” – but when you provide infrastructure in minutes vs. days or weeks, devs are more likely to use IT-provided resources vs. going rogue and provisioning public cloud resources on their own. Cost management solutions are only as good as YOUR tagging strategy. By that I mean, if you do not tag regularly and consistently, your cost solution can only “see” what it IS ABLE TO look for; it uses tags to track and categorize. Cost solutions don’t provision and tag resources. Trying to do it manually is inconsistent at best, and inconsistency is as good as not tagging at all. Without a diligent tagging strategy and execution, it doesn’t matter WHAT cost solution you choose; it can only do its job if it’s aware of the resources. 
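
A simple way to operationalize that discipline is a recurring tag-compliance check that flags anything your cost tool would not be able to categorize. The sketch below uses assumed tag keys and a hard-coded sample inventory; substitute your own discovery feed and tag policy.

```python
# Sketch of a tag-compliance check: flag resources a cost tool cannot
# categorize because required tags are missing.

REQUIRED_TAGS = {"owner", "cost_center", "environment"}

inventory = [  # sample discovery output, stands in for a real inventory feed
    {"id": "i-0abc", "tags": {"owner": "web", "cost_center": "CC-7", "environment": "prod"}},
    {"id": "i-0def", "tags": {"owner": "data"}},  # partially tagged
    {"id": "vm-042", "tags": {}},                 # untagged: invisible to cost reports
]

def untagged_resources(resources: list[dict]) -> list[tuple[str, set[str]]]:
    findings = []
    for resource in resources:
        missing = REQUIRED_TAGS - set(resource["tags"])
        if missing:
            findings.append((resource["id"], missing))
    return findings

for resource_id, missing in untagged_resources(inventory):
    print(f"{resource_id} is missing tags: {sorted(missing)}")
```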

Peer survey: Only 9% of respondents said they “always employ tagging”; 73% said they “sometimes used tagging.” 

“Can’t optimize one without the other…” – Cloud & Cost Management TOGETHER 

Why have so few vendors offered these two pieces together? They are integral and vital parts of increasingly complicated multi-cloud operations. Some hypothesize that NetApp can get there with its CloudCheckr acquisition, but they lack true cloud management & automation. VMware has the pieces but their approaches are VMware-centric, complicated, and rigid (plus, being bought by Broadcom has cast a shadow over near-term and long-term innovation). That leaves CloudBolt, which has embraced this cost & automation tandem for years.  

We believe in the importance of having something in place to regularly and automatically discover workloads (ones that may have slipped through the cracks). That, combined with a solid tagging strategy, is a prerequisite. A cost management solution without automated enforcement, ensuring anomalous spend not only is remediated but doesn’t happen again, solves the problem incompletely, which is like not solving it at all. (To ponder: if a cost management solution only shows/tracks/optimizes 55% of your total cloud spend, is it worth the annual subscription fee? Could you be gaining so much more if discovery and tagging were already observed best practices?) 

Multi-cloud visibility is fuzzy – Get the picture you want 

All the cost management solutions have similar AWS capabilities, but where they break down is multi-cloud. They were developed in a prior era, one where there was only one public cloud that mattered. But now Azure is nearly as popular as Amazon, and even GCP is rising in popularity. With 92 percent of organizations having a multi-cloud strategy in place or underway, being good at just AWS isn’t good enough anymore. 

Seek out solutions with good Azure capabilities (or GCP if that is your primary or secondary option). Ensure your cost solution gives you an overall view across clouds; most today require different screens and show different information, making it infinitely more difficult to compare, contrast, and optimize. Ensure flexibility in reporting. Inevitably, key stakeholders within the organization are going to want to see data and reports in new and particular ways. Reporting flexibility goes a long way after initial implementation. 

Automate & Orchestrate for higher levels of security and efficiency 

To reiterate, cost management solutions are not tagging solutions. They are not infrastructure provisioning, automation, and management tools either. But when used properly in combination, they become a powerful weapon. The power comes from continually identifying anomalies and bad behavior with a cost solution and then turning around and automating a process that ensures that particular anomaly or behavior doesn’t happen again with an infrastructure management solution. Common examples of this can include:   

  1. Workloads left idle when done  
  2. Ordering resources that are over-powered for the desired task 
  3. Forgetting to power down compute over weekends/during off hours 
  4. Choosing an over-priced option when cheaper & better alternatives are available 
  5. Over-provisioning reserved instances for “savings” 

Cost solutions typically only make you aware of the issues! While the first step is always identifying anomalies, the ability to ensure they don’t happen again is a comparative superpower. Governance that helps humans be less error-prone, less forgetful, and less wasteful is essential to the next phase of cloud. 
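
As one example of turning awareness into enforcement, the sketch below implements the off-hours power-down case from the list above. The list_instances and power_off functions are hypothetical stand-ins for whatever cloud or provisioning API you automate against, and the schedule and exemptions are assumptions.

```python
# Sketch of an off-hours power-down guardrail (example 3 in the list above).
from datetime import datetime

OFF_HOURS = range(20, 24)          # 8pm to midnight, local time (illustrative)
EXEMPT_ENVIRONMENTS = {"prod"}     # never touch production

def list_instances() -> list[dict]:
    return [  # sample data standing in for a real inventory call
        {"id": "vm-101", "tags": {"environment": "dev"},  "running": True},
        {"id": "vm-202", "tags": {"environment": "prod"}, "running": True},
    ]

def power_off(instance_id: str) -> None:
    print(f"powering off {instance_id}")  # placeholder for a real API call

def enforce_off_hours(now: datetime | None = None) -> None:
    now = now or datetime.now()
    is_off_hours = now.weekday() >= 5 or now.hour in OFF_HOURS  # weekends or evenings
    if not is_off_hours:
        return
    for instance in list_instances():
        env = instance["tags"].get("environment")
        if instance["running"] and env not in EXEMPT_ENVIRONMENTS:
            power_off(instance["id"])

enforce_off_hours()
```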

Bottom line, EVERYONE is suffering from a labor shortage and a skills gap… you simply cannot keep throwing people at the problems. It’s error-prone (which causes even further problems), expensive, time-consuming, and doesn’t scale. 

Pairing a multi-cloud cost management solution with a multi-cloud cloud management solution has become imperative. 

Tips & Tricks: Your solution(s) search 

Good Capabilities Across Major Clouds and vCenter 

If you’re not multi-cloud today, you will be soon. Not all cost management tools have solid features across the major clouds and on-premises vCenter. Today, having advanced capabilities on AWS is table stakes. Azure is fast becoming a real competitor to AWS, yet most cost management vendors have only rudimentary capabilities there. GCP coverage is even worse. 

Show Me Multi-Cloud Views 

Seek vendors with multi-cloud views. Many cost management vendors require you to hop between screens to “see” your multi-cloud spend. It’s annoying and eats up time. Why not show it all in a single dashboard? Similarly, seek vendors with flexible reporting. Requirements WILL change and people WILL request variations… be ready! 

Pair Cloud Management with Cost Management 

Ensure you have a good provisioning and automation solution that spans multiple clouds and on-prem. (If you have no on-prem, no problem, but nearly every enterprise has some on-premises infrastructure.) This lets you automate bad behavior away before it happens, which enables continuous improvement in how infrastructure is delivered. Ensure your provisioning/automation tool can tag resources and users. No tags = no tracking = no reports. 

It was easier in the past… 

Infrastructure delivery 10 years ago looked a lot different than it does today (heck, it likely looks a lot different today than just a couple of years ago). On-prem was easy to track and budget, and as we introduced elastic cloud computing, we could still handle tracking infrastructure usage, reporting, showback/chargeback, and improving processes. 

Even as we adopted more advanced technology, integrated more systems, and automated more processes while using more and more single cloud services, things for the most part stayed under control. We hired people. We used native tools. We bought business reporting & intelligence tools to give us new insights.  

But then…

Adding more & more over time weighs the organization down… 

Over time we kept adding more cloud services, more new technologies, more new tools, and added a 2nd or even 3rd public cloud to the mix, all thinking we were moving faster, figuring out a better way. But doing this eventually creates exponentially higher degrees of complexity that break 1st Generation tools. (See RESEARCH SHOWS – 79% Are Hitting a Wall Using Existing Tools & Platforms). 1st Gen Cloud Management Platforms (CMPs) and Cost Management & Optimization tools simply fall short in this more complex multi-cloud, multi-tool world we now live in.

We try to mask issues by hiring more people (analysts & admins) to cover the significantly greater complexity caused by more usage, more clouds, more tools… just more of everything!!! Some call this Technical Debt. Call it what you want, but it’s the boat most are in today.

Not enough people & time… 

Many of you have gotten to this point: 

The lift is too great to ask people to keep writing endless scripts, aggregating data in spreadsheets, and manually generating usage reports & areas to save.

New approach needed – We live in a new world with new problems and challenges… 

It’s time to redefine what MORE means… 

The new cloud world requires new tools. Stop trying to fit a square peg in a round hole; seek solutions that bring more integration capabilities (so they can work with existing tools and automate larger activities), more flexibility to approach issues in different ways (adapting more easily to YOUR process than the vendor’s), and more current and differentiated capabilities. For example: 

A decade in the grand scheme of things is not a long period of time. But in cloud computing, it’s an eternity. Companies need to resist the pull of inertia that has many accepting the limitations of technologies (and resulting processes) that were never designed to solve the multi-layered cloud complexities of today. The future belongs to the flexible – flexibility to integrate, automate and orchestrate while optimizing costs and ensuring governance. In the near future, there will be a stark difference between the enterprises and service providers who make the pivot rapidly and those who don’t.  

The only question left is which do you want to be? 

See how CloudBolt can help you maximize your cloud fabric.