Terraform OpenStack: Automating Application Server Scaling and Management
The technology landscape is constantly evolving. Organizations are going through digital transformations and building cloud-native applications while using the power and capabilities of DevOps and CI/CD to accelerate the deployments of higher-quality software.
Infrastructure management and operations are also going through a paradigm shift. The industry is moving from manual server provisioning, laborious configuration changes, and several hours spent on verifications and compliance checks toward self-service deployments and automated security checks.
This article explores how infrastructure as code (IaC) is reshaping digital infrastructure management. To make the discussion concrete, we walk through a real-world example of deploying an application server on the OpenStack cloud. Finally, we discuss recommended best practices to make the transformation robust and efficient.
Summary of key Terraform OpenStack recommendations
Terraform, OpenStack, and IaC are powerful tools for managing infrastructure, but they can be complex and challenging to use. Here is a summary of some recommended practices to help you improve your infrastructure management using these tools.
| Best Practice | Purpose |
|---|---|
| Implement change management, version control, and an approval process | Utilize GitOps for infrastructure management to ensure that changes are made in a controlled and auditable manner. |
| Define guardrails and security checks | Define policies and controls for acceptable security practices and prevent deviations from expected behavior. |
| Use a secrets management tool | Ensure the secure storage, access, and management of sensitive data through the use of a secrets management tool. |
| Take a modular approach and delegate components | Utilize reusable code components for infrastructure provisioning and management. |
Problem statement: a manual process of taking an application live
Suppose that your technology services company has developed an in-house CRM application to cater to your customers. The application must be deployed through a highly available topology that is resilient to failures and can handle significant user traffic load.
The development team hands off requirements to the infrastructure operations team: the application needs two load balancers, four web servers, and a three-server database cluster. Concurrently, the cybersecurity team sets policies, guidelines, and security checklists to ensure compliance with privacy laws.
In traditional IT operations, it would take multiple days of work for the infrastructure team to deploy the required servers, create the necessary network topology, and allocate storage. The infrastructure team would also have to follow security checklists for configuring the appropriate firewall rules, server hardening, etc.
Any change in the structure of the CRM application, spikes in end-user traffic, or the requirement to build a disaster recovery site would trigger a process of (re)deployment, following checklists, and so on. This could also span multiple days (and maybe even weeks) to bring the required changes to production.
In our modern, fast-paced world, lengthy delays of this sort are untenable—the market expects faster response times. In the following sections, we explore strategies for ensuring operational efficiency without compromising infrastructure stability and security.
What is infrastructure as code (IaC)?
The deployment, management, operation, and expansion of infrastructure come with multiple challenges. You have to maintain agility, ensure stability and performance, conform to security standards, and document changes for auditing and compliance.
To address these challenges, implement best practices for software development and also follow the infrastructure as code (IaC) model to define the desired infrastructure state as software code. In the IaC model, we manage infrastructure operations using code instead of following manual or interactive processes of provisioning and configuration changes, as discussed above. We define the state of our infrastructure (i.e., compute, network, storage, and application configurations) in a descriptive language model, and the software then implements the required steps to achieve the desired state.
The IaC model yields many benefits:
- Code helps us bring automation to our workflows. We can use infrastructure management tools like Terraform (discussed next) and Ansible to execute the instructions.
- Automating infrastructure operations and management reduces human error and increases operational efficiency and scalability.
- We can include checklists in our automated workflows to implement security standards and best practices, which become inherent parts of our infrastructure deployment and operations.
- We can use a version control repository, which allows us to track changes, implement version control, and perform rollbacks.
What is Terraform?
When multiple teams manage the same infrastructure, keeping an accurate, shared view of its state is a constant challenge. Terraform, developed by HashiCorp, delivers the core capabilities of IaC by acting as the definitive reference for your infrastructure configuration. As a single source of truth, it allows you to build and maintain a standard operating environment across diverse infrastructures, and it can manage compute, network, storage, security ACLs, and more on both public and private clouds.
Terraform comes with extensive integration options for various providers, including AWS, Azure, Alibaba Cloud, GCP, OCI, Kubernetes, and many more. A comprehensive list of supported providers and guidelines is available on the Terraform registry website.
Terraform versions
Terraform is available in three versions:
- A free CLI utility whose source is available under the Business Source License (BSL 1.1), a source-available license rather than a traditional open-source one.
- Terraform Cloud, a SaaS platform that starts free for small teams and offers paid plans for larger organizations.
- Terraform Enterprise, a self-hosted distribution of Terraform Cloud for advanced security and compliance requirements.
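For illustration, here is a minimal sketch of how a local CLI workflow can be attached to a Terraform Cloud workspace by adding a `cloud` block to the configuration; the organization and workspace names below are hypothetical placeholders:

```hcl
terraform {
  cloud {
    organization = "example-org"   # hypothetical Terraform Cloud organization
    workspaces {
      name = "crm-openstack"       # hypothetical workspace for the CRM infrastructure
    }
  }
}
```

After running `terraform login` and `terraform init`, plans and state for this configuration are handled by the remote workspace.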
Terraform workflow: Write, plan, apply
Terraform allows you to define the desired state of your infrastructure in a declarative language. Instead of writing out the steps to deploy the infrastructure, you define your intent in the HashiCorp Configuration Language (HCL).
For your CRM app, say you want a Debian VM in AWS’s us-west-2 region, configured with two vCPUs and 4 GB RAM. Your HCL code would look like this:
provider "aws" { region = "us-west-2" } resource "aws_instance" "debian_vm" { ami = "ami-XXXXXXXXXXXXXXXX" # Debian image ID instance_type = "t2.medium" tags = { Name = "AppServerInstance" } }
Once this is fed as input to Terraform, it generates an execution plan based on the definitions. The plan describes how Terraform will change the infrastructure to match the definition. Based on the provided input, Terraform can commission new resources, update existing ones, or destroy parts or all of the infrastructure.
The next step is to apply the created plan. Terraform will determine the required sequence of steps to create, update, or destroy relevant infrastructure components.
What is OpenStack?
Cloud computing offers the flexibility to provision infrastructure components like compute, network, and storage on-demand without getting into the complexities of managing actual physical components like servers, switches, storage arrays, and firewalls. Public cloud providers offer benefits like flexible resource provisioning with economies of scale but may not fit all use cases.
Why on-prem infrastructure?
Some businesses need more control: they may have predictable capacity requirements, or security and compliance obligations that make physical control over infrastructure and data access practically a must-have.
An organization may also want to reduce external dependencies and ensure the lowest possible latency for its applications. Building a private cloud is an option, but only if you have the IT staff required to manage your own data centers.
OpenStack: open source cloud computing
OpenStack is a popular open-source cloud computing platform with significant community support. OpenStack allows you to build a hyper-converged computing, network, and storage infrastructure using commercial off-the-shelf servers and networking hardware. With OpenStack, you can achieve similar capabilities as public cloud offerings, like flexible provisioning and a self-service dashboard for your internal teams. The platform also provides API interfaces that you can use for integration into your automated workflows.
Deployment on OpenStack using Terraform
Terraform offers the capability to deploy and manage the required resources on the OpenStack platform. You can utilize Terraform to manage compute resources (instances), network resources (network, subnet, floating IP, and security groups), and storage resources (volume and block storage). The details of the configuration options and resources are available on the OpenStack provider page.
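As a hedged illustration of the non-compute resources mentioned above, the following sketch defines a private network and subnet with the OpenStack provider; the names and CIDR are hypothetical, so adapt them to your cloud:

```hcl
# Hypothetical private network for the web tier
resource "openstack_networking_network_v2" "web_net" {
  name           = "Web-Server-Net"
  admin_state_up = true
}

# Subnet attached to the network above
resource "openstack_networking_subnet_v2" "web_subnet" {
  name       = "Web-Server-Subnet"
  network_id = openstack_networking_network_v2.web_net.id
  cidr       = "192.168.10.0/24"   # example address range
  ip_version = 4
}
```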
Create an instance on OpenStack Cloud
The following steps assume that you already have an OpenStack cloud deployment and have the required credentials for a project/tenant with the appropriate privileges to create the cloud resources.
For the purposes of this demo:
- Deploy a front-end web server for the CRM application.
- Provision an AlmaLinux 8.6 instance on OpenStack, connect it to a specific network, attach a floating IP, and protect it with a specific security group.
- Use an Ubuntu Linux client machine to perform the following steps (the same can be done on Windows or macOS by installing the required prerequisite programs).
Prerequisite: Install Terraform
Follow the steps provided in the Terraform installation tutorial. After installation, verify that the CLI is installed properly (for example, by running `terraform -version`) and review the available command-line options with `terraform -help`.
provider "aws" { region = "us-west-2" } resource "aws_instance" "debian_vm" { ami = "ami-XXXXXXXXXXXXXXXX" # Debian image ID instance_type = "t2.medium" tags = { Name = "AppServerInstance" } }
Prerequisite: Install OpenStack client
The instructions for installing the OpenStack CLI client are available in the OpenStack documentation for different platforms. Most Linux distributions include the client in their repositories. For Ubuntu, the CLI client can be installed using the apt package manager, as follows.
```shell
$ sudo apt install python3-openstackclient
```
The OpenStack client requires the cloud's authentication URL and appropriate credentials to connect to OpenStack. Create a YAML configuration file at ~/.config/openstack/clouds.yaml; you can define connection details for multiple OpenStack clouds in this file.
```yaml
clouds:
  devstack:                                          # Cloud name
    auth:
      auth_url: https://keystone.domain.com:5000/v3  # API endpoint
      project_name: devproject                       # OpenStack project name
      username: admin                                # Authentication user
      password: 'SECUREPASS'                         # User password
    identity_api_version: 3
    region_name: RegionOne                           # OpenStack region
```
Verify the OpenStack API connectivity and your credentials by retrieving the resources from your OpenStack project.
```shell
$ openstack --os-cloud devstack image list --max-width=60
+-----------------------+------------------------+--------+
| ID                    | Name                   | Status |
+-----------------------+------------------------+--------+
| 91cd8f2b-40ba-4105    | almalinux-8.6          | active |
| 93e71c60-b60d-4fda    | almalinux-9.0          | active |
| 1a586939-eae9-453c    | centos-7.9.2009        | active |
| 9b8b3f5d-2b02-4c5c    | debian-10.10           | active |
| c81e0d8e-45be-42fc    | debian-9.13            | active |
```
Initialize Terraform
With the prerequisites in place, you are now ready to start the deployment of your resources on the OpenStack cloud via Terraform.
Create a working directory for Terraform, which you will use to create the Terraform configuration files. Terraform downloads all the plugins required for your deployment to this working directory and stores your infrastructure’s current state here.
```shell
$ mkdir -p work/terraform
$ cd work/terraform
```
Terraform requires its configuration files to be created with the suffix .tf. Create a Terraform main configuration file called main.tf in the working directory. This file defines your provider and the resources required to be provisioned with the cloud provider.
Terraform documentation includes help and guides for interacting with the different providers it supports. Visit the OpenStack provider's registry page for sample code snippets showing how to integrate an OpenStack cloud with Terraform.
Copy and paste the following into the main.tf configuration file:
```hcl
# Define required providers
terraform {
  required_version = ">= 0.14.0"
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "1.52.1"
    }
  }
}

# Configure the OpenStack Provider
provider "openstack" {
  cloud = "devstack"   # Cloud defined in the clouds.yaml file
}
```
You need to initialize the project after defining the OpenStack provider in the configuration. The following command will prepare your working directory and download the Terraform plugin for the provider defined in the main.tf file.
```shell
~/work/terraform$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding terraform-provider-openstack/openstack versions matching
- Installing terraform-provider-openstack/openstack v1.52.1...
- Installed terraform-provider-openstack/openstack v1.52.1
. . . . . . . .
Terraform has been successfully initialized!
```
Create a virtual machine
Create a configuration file called devproject.tf and define the resources to be provisioned on the OpenStack cloud. The file name itself does not matter to Terraform as long as it ends in .tf, but choose something descriptive. Define the SSH keypair to be used, the network to attach to the VM, the default security group, and a name for the instance.
# Variables variable "keypair" { type = string default = "SSH-KEY" # default ssh keypair } variable "network" { type = string default = "Web-Server-Net" # default network } # Data sources ## Get Image ID data "openstack_images_image_v2" "image" { name = "almalinux-8.6" # Name of image to be used most_recent = true } ## Get flavor id data "openstack_compute_flavor_v2" "tinyflavor" { name = "sm1.tiny" # flavor to be used } # Create an instance resource "openstack_compute_instance_v2" "webserver01" { name = "WebServer01" #Instance name image_id = data.openstack_images_image_v2.image.id flavor_id = data.openstack_compute_flavor_v2.tinyflavor.id key_pair = var.keypair security_groups = [ "default", ] network { name = var.network } }
Verify that the created configuration files do not have any errors.
```shell
~/work/terraform2$ terraform validate
Success! The configuration is valid.
```
Now apply the configuration. Terraform will display the changes it plans to make and wait for approval. After you enter yes, it starts the deployment (output truncated below).
```shell
~/work/terraform2$ terraform apply
data.openstack_compute_flavor_v2.tinyflavor: Reading...
data.openstack_images_image_v2.image: Reading...
...
Terraform will perform the following actions:

  # openstack_compute_instance_v2.webserver01 will be created
  + resource "openstack_compute_instance_v2" "webserver01" {
      + key_pair        = "JA-SSH"
      + name            = "WebServer01"
      + power_state     = "active"
      + security_groups = [
          + "default",
        ]
      ...
      ...
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

openstack_compute_instance_v2.webserver01: Creating...
openstack_compute_instance_v2.webserver01: Still creating...

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
The last output line verifies that the apply command successfully deployed the required instance on the OpenStack cloud. You can also check the status of the instance using the OpenStack CLI. The following output lists the state of the WebServer01 instance with the boot Image and the instance flavor.
```shell
~/work/terraform$ openstack --os-cloud devstack \
    server list --max-width=70 -c Name -c Status -c Image -c Flavor
+---------------+---------+-------------------------+------------+
| Name          | Status  | Image                   | Flavor     |
+---------------+---------+-------------------------+------------+
| WebServer01   | ACTIVE  | almalinux-8.6           | sm1.tiny   |
+---------------+---------+-------------------------+------------+
```
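The configuration above attaches the instance to a tenant network only. To expose it externally, as outlined in the demo goals, you could extend the same configuration with a floating IP and associate it with the instance. A minimal sketch, assuming an external network pool named `public` exists on your cloud:

```hcl
# Allocate a floating IP from the external pool (pool name varies per cloud)
resource "openstack_networking_floatingip_v2" "web_fip" {
  pool = "public"
}

# Associate the floating IP with the web server instance defined earlier
resource "openstack_compute_floatingip_associate_v2" "web_fip_assoc" {
  floating_ip = openstack_networking_floatingip_v2.web_fip.address
  instance_id = openstack_compute_instance_v2.webserver01.id
}
```

Running `terraform apply` again would add these two resources without recreating the existing instance.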
Challenges of infrastructure as code
Terraform, Ansible, and other IaC utilities solve many real-world problems faced by infrastructure and operations teams today. However, these utilities are also known to bring their own set of challenges to the mix.
One of the biggest challenges is the learning curve for the infrastructure team, which needs to learn to write code in the HashiCorp Configuration Language (HCL) and/or Ansible playbooks in YAML format. Beyond the syntax, the team also needs to learn coding best practices: how to use variables and modules, how to write reusable code, and so on.
CloudBolt helps organizations transition to IaC more smoothly and efficiently by offering a variety of features that can help reduce the learning curve for the infrastructure team, accelerate the benefits of IaC, and manage cloud environments more effectively. Learn how CloudBolt’s self-service portal and catalog allow you to quickly request, define, and automate the deployment of complex cloud environments.
Recommendations
Implementing the IaC model solves many real-world problems in infrastructure operations and management. To get the most out of IaC, follow the recommendations below.
Implement change management, version control, and an approval process
With hundreds or thousands of infrastructure components—VMs, containers, networks, security ACLs, volume/block storage, etc.—it can be difficult to track changes without a centralized system of record.
To address this challenge, adopt the GitOps practice where the infrastructure state is documented in a centralized repository as a single source of truth. Any change must go through a change management and approval process. All the changes, no matter how big or small, should be tracked so they can be audited and rolled back if required.
There are multiple ways to implement GitOps:
- Write your infrastructure as code and use a Git repository for version control (GitHub or GitLab).
- Implement a deployment and operations pipeline (Jenkins or CircleCI).
- Use an IaC utility (Terraform or Ansible) to perform the deployments and configuration changes.
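Alongside keeping the code in Git, Terraform's state file can also be moved off individual workstations into a shared backend so that the whole team works against one source of truth. A minimal sketch, assuming an S3-compatible object store is available (the bucket name, key, and region below are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket = "tf-state"                # placeholder bucket for shared state
    key    = "crm/terraform.tfstate"   # path of the state object within the bucket
    region = "us-east-1"               # placeholder region
  }
}
```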
CloudBolt also provides built-in CI/CD pipelines that can be used to implement GitOps. This can simplify the GitOps setup and management process and free up your team to focus on developing and enhancing your applications.
Define guardrails and security checks
IT governance and security are a must for any sizable infrastructure, especially one exposed to the Internet. Define best practices and security checks for team members to follow to ensure smooth and secure IT operations. Examples of such policies include the following:
- The infrastructure must use a long-term support (LTS) release platform.
- VMs must not have a public (floating) IP address attached unless a specific requirement exists.
- Applications must be deployed in a multi-tier topology, with application components (load balancer, middleware, database) connected with their specific networks.
- Networks must be protected via firewalls and security-approved ACLs.
These policies can be translated into guardrails defined within your IaC workflows so that they are not overlooked during day-to-day infrastructure operations. If your organization uses Terraform or Ansible, your infrastructure team will have to build these checks into the provisioning workflows itself.
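For example, the firewall policy above can be codified directly in Terraform so that every deployment gets the same security-approved rules. A hedged sketch using the OpenStack provider (the group name and allowed port are illustrative):

```hcl
# Security-approved baseline group for Internet-facing web servers
resource "openstack_networking_secgroup_v2" "web_baseline" {
  name        = "web-baseline"
  description = "Approved ingress rules for Internet-facing web servers"
}

# Only HTTPS is allowed in from the Internet
resource "openstack_networking_secgroup_rule_v2" "https_in" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 443
  port_range_max    = 443
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.web_baseline.id
}
```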
Alternatively, consider utilizing the capabilities of a modern cloud management platform like CloudBolt to automate compliance checks and access control. This can help reduce human errors while simplifying and accelerating infrastructure deployment and operations.
Use a secrets management tool
When you perform deployments on AWS, OpenStack, or any other infrastructure, you need sensitive credentials (such as passwords or API tokens) to authenticate and carry out the required operations. Storing credentials alongside your IaC code is a serious security risk, and you need to ensure that they never get exposed or leaked.
Using a secrets management engine instead of plain text files is highly recommended: its purpose is to store sensitive information in encrypted form and control access to it.
One popular option is HashiCorp Vault, which is available as a self-hosted tool (in free and enterprise editions) or as a managed cloud service. You can integrate your IaC utility with Vault to store and retrieve credentials under proper policy control.
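As a hedged sketch of what this integration can look like, the Vault provider can read a secret at plan time and pass it to the OpenStack provider, instead of keeping a password in a clouds.yaml file or in the repository. The Vault address and secret path below are hypothetical:

```hcl
provider "vault" {
  address = "https://vault.example.com:8200"   # hypothetical Vault endpoint
}

# Read the OpenStack credentials from a KV secret (path is illustrative)
data "vault_generic_secret" "openstack_creds" {
  path = "secret/openstack/devstack"
}

provider "openstack" {
  auth_url    = "https://keystone.domain.com:5000/v3"
  tenant_name = "devproject"
  user_name   = "admin"
  password    = data.vault_generic_secret.openstack_creds.data["password"]
  region      = "RegionOne"
}
```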
Take a modular approach and delegate components
With a large and complex infrastructure, delegating and distributing tasks is always recommended. This brings efficiency and agility to your operations. It is recommended to use a modular approach in your IaC; like functions or objects in software code, different teams can write specific modules for specific functions that other team members can reuse.
Terraform supports creating modules. Your infrastructure team can write a module that deploys an Ubuntu 22.04 OS with Nginx 1.24 and specific firewall rules; your application team can then reuse this module to deploy web servers running the application code.
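A hedged example of what consuming such a module could look like; the module path and input variables are hypothetical and depend on how your team writes the module:

```hcl
# Reuse an in-house module that installs Nginx and applies approved firewall rules
module "web_server" {
  source  = "./modules/nginx-web-server"   # hypothetical local module path
  name    = "WebServer02"
  network = "Web-Server-Net"
  keypair = "SSH-KEY"
}
```

Modules can also be published to a private registry so that other teams can version, pin, and reuse them.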
With CloudBolt’s Blueprints, you can create blueprints for controlled and repeatable deployments. You can use a blueprint for a single VM or a complex multi-tier, multi-cloud deployment, and you can include reusable components such as a web server, database server, and caching server.
Summary of key concepts
Infrastructure as code offers a wide range of benefits, from efficiency and speed in deployments to security and compliance. With built-in, self-documented workflows, you can achieve scalability, version control, and even disaster recovery. Terraform can be used to harness all the benefits of IaC, and it can be integrated with multiple public clouds as well as on-premises infrastructure like OpenStack.
While implementing IaC, follow the best practices and recommendations discussed above. Adopting Terraform as your IaC tool still comes with its own set of challenges, which can be overcome with a platform like CloudBolt that handles most of the complexity for you. CloudBolt's hybrid cloud management platform helps organizations manage cloud resources across multiple providers through a single pane of glass for provisioning, management, and optimization, bringing modularity to your infrastructure while reducing the complexity and cost of managing it.