Deploying a Kubespray cluster to OpenStack using Terraform
Kubespray is a community-driven project that provides a set of Ansible playbooks to deploy a production-ready Kubernetes cluster. It is a great tool for deploying Kubernetes on OpenStack. This guide details how to use Terraform to automate the creation of your OpenStack infrastructure and Ansible to deploy a Kubespray cluster on it.
We'll be using the official Kubespray documentation as a reference. Kubespray offers:
- Support for most popular network plugins (Calico, Cilium, Contiv, Flannel, Multus, Weave, Kube-router, Romana, Amazon VPC CNI, etc.)
- Support for most popular Linux distributions
- Upgrade support from a previous Kubernetes version
- Composable attributes
- Declarative way to customize cluster configuration through a configuration file
- Network load balancer (MetalLB) for services of type LoadBalancer
- Configurable bootstrap tools for the Kubernetes cluster
- Multi-purpose bootstrap node used as a bastion (optional)
- GPU node support
You'll need an OpenStack cloud. If you don't have OpenStack, you can sign up for a free trial today with OpenMetal.
We'll be performing this deployment from a VM running Ubuntu 20.04, though you can also use one of your OpenMetal cloud core nodes or your workstation. This guide will have you install Terraform and Ansible in that environment.
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform supports existing, popular service providers as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire data center.
Terraform generates an execution plan describing what is needed to reach the desired state, then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied. This allows for high fidelity plans and helps reduce out-of-band changes, which can lead to drift and conflicts.
We'll install Terraform from HashiCorp's apt repository. For non-Debian-based systems, please see the official Terraform installation documentation.
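A minimal sketch of the install on Ubuntu 20.04, following HashiCorp's documented apt repository steps at the time of writing:

```bash
# Add HashiCorp's signing key and apt repository, then install Terraform
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt update && sudo apt install -y terraform

# Confirm the binary is on your PATH
terraform version
```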
Next, prepare a Python virtual environment for the tooling we'll install with pip: ensure the required Python modules are installed, create and activate a virtual environment, then update pip, as shown below.
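A sketch of those steps on Ubuntu 20.04; the environment path `~/.venvs/kubespray` is our own choice, so use whatever you like:

```bash
# Install venv and pip support
sudo apt update && sudo apt install -y python3-venv python3-pip

# Create and activate the virtual environment
python3 -m venv ~/.venvs/kubespray
source ~/.venvs/kubespray/bin/activate

# Update pip inside the environment
pip install -U pip
```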
We'll be creating a project to deploy our infrastructure into. You can use an existing project if you have one.
Note: You can substitute the admin user if you already have your own user.
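A sketch of the project setup, assuming you have the OpenStack CLI installed and admin credentials sourced; the project name `kubespray-demo` is our example:

```bash
# Create a project for the cluster and grant our user access to it
openstack project create --description "Kubespray demo cluster" kubespray-demo
openstack role add --user admin --project kubespray-demo member
```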
If you have not already done so, download your openrc.sh file from your project's "API Access" menu. Save the OpenStack RC file to your workspace and source it. This is an important step, as it sets the environment variables Terraform uses to authenticate with OpenStack. Double-check that these values are correct.
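For example:

```bash
# Set the OS_* environment variables for Terraform and the OpenStack CLI
source ./openrc.sh

# Verify the credentials and endpoints were exported
env | grep ^OS_
```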
The Kubespray repository contains the Ansible playbooks and Terraform templates we'll be using. Pull them down now with git:
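Pinning a release branch or tag is a good idea; the branch below is illustrative, so pick a current release:

```bash
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
git checkout release-2.16   # illustrative; substitute a current release
```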
Install Ansible and the other requirements with pip:
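```bash
# Installs Ansible and friends into the active virtual environment
pip install -r requirements.txt
```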
Next, we'll generate the Terraform configuration for the cluster. Copying Kubespray's sample inventory produces a few files, including one named cluster.tfvars; this file configures the nodes and networks for your cluster. We'll be using the OpenStack CLI to help populate our Terraform variables; if you don't have it installed, see the OpenStack client documentation. We provide a simplified example configuration here, and you will likely want to configure more options when setting up your own cluster; for the full list of variables, refer to the Kubespray OpenStack Terraform documentation. Note: We've added comments to help you fetch the values you want to replace from OpenStack.
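First, copy the sample inventory that ships with the templates (run from the kubespray repo root), then edit cluster.tfvars. Every value below is an illustrative assumption; fetch the real names, flavors, and images from your cloud using the commands in the comments:

```bash
# Create the test-cluster inventory from Kubespray's sample
cp -LRp contrib/terraform/openstack/sample-inventory inventory/test-cluster
cd inventory/test-cluster
ln -s ../../contrib/terraform/openstack/hosts

# Sketch of a minimal cluster.tfvars; replace every value with your own
cat > cluster.tfvars <<'EOF'
cluster_name = "test-cluster"

# openstack network list --external   -> external network name
external_net    = "External"
floatingip_pool = "External"

# openstack flavor list               -> flavor names or IDs
flavor_k8s_master = "m1.large"
flavor_k8s_node   = "m1.large"
flavor_bastion    = "m1.small"

number_of_k8s_masters = 1
number_of_k8s_nodes   = 2
number_of_bastions    = 1

# openstack image list                -> image name and its default ssh user
image    = "Ubuntu 20.04"
ssh_user = "ubuntu"

network_name = "test-cluster-network"
subnet_cidr  = "10.0.0.0/24"
EOF
```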
Note: Run these commands from the kubespray/inventory/test-cluster directory.
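A sketch of the Terraform workflow, following the Kubespray contrib README; the exact invocation varies with your Terraform version:

```bash
# Initialize the providers, then create the infrastructure
terraform init ../../contrib/terraform/openstack
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/openstack
```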
You'll be prompted to confirm your changes to OpenStack; type yes to continue. Once the process completes, the infrastructure required to deploy Kubernetes will be available in your OpenStack project.
Note: If you want to destroy your resources, you can run the following command:
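```bash
# Destroys every resource Terraform created for this cluster
terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/openstack
```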
The Terraform run created your nodes and an Ansible inventory file. Next, prepare the Ansible variables.
These are the options we updated to deploy the cluster with the OpenStack Cloud Provider, Cinder CSI, and support for Octavia load balancers.
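The variable names and file locations below reflect Kubespray's group_vars layout at the time of writing; the defaults in these files ship commented out, so appending works, but double-check against your release before applying:

```bash
# Run from the kubespray repo root.
# Enable the external OpenStack cloud provider.
cat >> inventory/test-cluster/group_vars/all/all.yml <<'EOF'
cloud_provider: external
external_cloud_provider: openstack
EOF

# Enable Cinder CSI and Octavia-backed load balancers.
cat >> inventory/test-cluster/group_vars/all/openstack.yml <<'EOF'
cinder_csi_enabled: true
external_openstack_lbaas_enabled: true
EOF
```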
You are now ready to deploy Kubernetes. The following command needs to be run from the kubespray directory. The process takes some time to complete, depending on the number of resources you deploy; in our example, it took about 12 minutes.
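With the virtual environment still active; `ansible_user` must match your image's ssh user (ubuntu in our example):

```bash
# --become is required for the system-level changes Kubespray makes
ansible-playbook -i inventory/test-cluster/hosts cluster.yml --become -e ansible_user=ubuntu
```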
If you followed along with the guide, you have a bastion node you can use to access your cluster. If you don't have a bastion node, you can skip this step.
To create the configuration file used to authenticate with the cluster, several certificates must be copied from the master node. Replace <master_ip> with the IP address of your master node:
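One approach is to pull the admin kubeconfig, which bundles those certificates, and repoint it at the master's address. This is a sketch: substitute your ssh user, and use the bastion as a jump host if your masters lack floating IPs.

```bash
# Copy the admin kubeconfig (contains the cluster CA and client certs)
ssh ubuntu@<master_ip> sudo cat /etc/kubernetes/admin.conf > admin.conf

# admin.conf usually points at https://127.0.0.1:6443; use the master's IP
sed -i 's/127.0.0.1/<master_ip>/' admin.conf

export KUBECONFIG=$PWD/admin.conf
kubectl get nodes
```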
You should now have a working configuration file. Save this in a safe place to access your cluster from a machine that can reach your master node.
By enabling the OpenStack Cloud Provider, Kubespray configured a few pods that should now be in the running state.
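You can spot them in the kube-system namespace:

```bash
# Cloud controller manager and Cinder CSI pods should be Running
kubectl -n kube-system get pods | grep -E 'openstack|cinder'
```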
The OpenStack Cloud Provider supports Octavia load balancers. Verify this is working by creating a service of type LoadBalancer. Once you create the service, you should see a new load balancer appear in the OpenStack dashboard.
Create a service with the following command:
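We use a throwaway nginx deployment as the backend; the names here are our own:

```bash
# Expose a test deployment through an Octavia load balancer
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
```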
You can verify that the load balancer was created by running the following command:
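```bash
# Requires the python-octaviaclient plugin for the OpenStack CLI
openstack loadbalancer list
```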
You should also see a floating IP associated with the load balancer service in Kubernetes. This may take a couple of minutes to complete:
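```bash
# EXTERNAL-IP reads <pending> until Octavia finishes provisioning
kubectl get service nginx
```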
Next, we'll verify that Cinder volumes are working. First, create a storage class:
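A minimal class for the Cinder CSI driver; the name `cinder-csi` is our own choice:

```bash
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-csi
provisioner: cinder.csi.openstack.org
EOF
```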
Now create a PersistentVolumeClaim by running the following command:
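A 1Gi claim against the class above; the claim name `redis-data` is our example:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cinder-csi
  resources:
    requests:
      storage: 1Gi
EOF
```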
We'll deploy a Redis instance configured to use the volume we created in the previous step.
Warning: This is just an example. Do not use this in production.
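A sketch of such a deployment, mounting the claim from the previous step:

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:6
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: redis-data
EOF

# The claim binds once the pod schedules and the Cinder volume attaches
kubectl get pvc redis-data
```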
You should now have a working Kubernetes cluster with the OpenStack Cloud Provider enabled. You can now deploy your applications to the cluster.