How to run Terraform on DigitalOcean


Introduction

Terraform is a tool for creating and managing infrastructure in an organized manner. You can use it to manage DigitalOcean Droplets, Load Balancers, and even DNS entries, in addition to a wide range of services offered by other providers. Terraform uses a command-line interface and can be run from your desktop or a remote server.

Terraform works by reading configuration files that describe the components that make up your application environment or data center. Based on the configuration, it creates an executable plan that describes what to do to achieve the desired state. You then use Terraform to execute this plan to build your infrastructure. When changes occur in the configuration, Terraform can create and execute incremental plans to update the existing infrastructure to the new described state.

In this tutorial, you will install Terraform and use it to create an infrastructure on DigitalOcean that consists of two Nginx servers that are load-balanced by a DigitalOcean Load Balancer. Next, you will use Terraform to add a DNS entry on DigitalOcean that points to the Load Balancer. This will help you get started using Terraform and give you an idea of how you can use it to manage and deploy a DigitalOcean-based infrastructure that meets your needs.

Prerequisites
  • A DigitalOcean account
  • A DigitalOcean personal access token, which you can create through the DigitalOcean control panel.
  • A password-less SSH key added to your DigitalOcean account.
  • A personal domain with its nameservers pointed at DigitalOcean.

Step 1 – Installing Terraform

Terraform is a command-line tool that you run on your desktop or on a remote server. To install it, you download it and place it in your PATH so you can run it from any directory you are working in.

First, download the appropriate package for your operating system and architecture from the official downloads page. If you are using macOS or Linux, you can download Terraform with curl.

On macOS, use this command to download Terraform and place it in your home directory:

curl -o ~/terraform.zip https://releases.hashicorp.com/terraform/1.7.2/terraform_1.7.2_darwin_amd64.zip

On Linux, use this command:

curl -o ~/terraform.zip https://releases.hashicorp.com/terraform/1.7.2/terraform_1.7.2_linux_amd64.zip

Create the ~/opt/terraform folder:

mkdir -p ~/opt/terraform

Then unzip Terraform to ~/opt/terraform using the unzip command. On Ubuntu, you can install unzip using apt:

sudo apt install unzip

Use it to extract the downloaded archive to the ~/opt/terraform folder by running:

unzip ~/terraform.zip -d ~/opt/terraform

Finally, add ~/opt/terraform to your PATH environment variable so that you can run the terraform command without specifying the full path to the executable file.

On Linux, you define PATH in ~/.bashrc, which is executed whenever a new shell opens. Open it for editing by running the following:

nano ~/.bashrc

To add the Terraform path to PATH, add the following line at the end of the file:

export PATH=$PATH:~/opt/terraform

When finished, save and close the file.

Now all your new shell sessions can find the terraform command. To load the new PATH into your current session, if you are using Bash on a Linux system, run the following command:

. ~/.bashrc

If you're using Bash on macOS, run this command instead:

. ~/.bash_profile

If you are using ZSH, run this command:

. ~/.zshrc

To verify that you have installed Terraform correctly, run the terraform command without arguments:

terraform

You will see output similar to the following:

Output
Usage: terraform [global options] <subcommand> [args]

The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.

Main commands:
  init          Prepare your working directory for other commands
  validate      Check whether the configuration is valid
  plan          Show changes required by the current configuration
  apply         Create or update infrastructure
  destroy       Destroy previously-created infrastructure

All other commands:
  console       Try Terraform expressions at an interactive command prompt
  fmt           Reformat your configuration in the standard style
  force-unlock  Release a stuck lock on the current workspace
  get           Install or upgrade remote Terraform modules
  graph         Generate a Graphviz graph of the steps in an operation
  import        Associate existing infrastructure with a Terraform resource
  login         Obtain and save credentials for a remote host
  logout        Remove locally-stored credentials for a remote host
  output        Show output values from your root module
  providers     Show the providers required for this configuration
  refresh       Update the state to match remote systems
  show          Show the current state or a saved plan
  state         Advanced state management
  taint         Mark a resource instance as not fully functional
  test          Experimental support for module integration testing
  untaint       Remove the 'tainted' state from a resource instance
  version       Show the current Terraform version
  workspace     Workspace management

Global options (use these before the subcommand, if any):
  -chdir=DIR    Switch to a different working directory before executing the
                given subcommand.
  -help         Show this help output, or the help for a specified subcommand.
  -version      An alias for the "version" subcommand.

These are the commands that Terraform accepts. The output will give you a brief explanation, and you will learn more about them throughout this tutorial.

Now that Terraform is installed, let's configure it to work with DigitalOcean resources.

Step 2 – Configure Terraform for DigitalOcean

Terraform supports a variety of providers that you can install. Each provider exposes resources that map onto its service's API.

The DigitalOcean provider allows Terraform to interact with the DigitalOcean API to create infrastructure. This provider supports the creation of various DigitalOcean resources, including:

  • digitalocean_droplet: Droplets (servers)
  • digitalocean_loadbalancer: Load Balancers
  • digitalocean_domain: DNS domains
  • digitalocean_record: DNS records

Terraform uses your DigitalOcean personal access token to communicate with the DigitalOcean API and manage resources in your account. Do not share this token with others, and keep it out of scripts and version control. Export your DigitalOcean personal access token to an environment variable named DO_PAT by running the following:

export DO_PAT="your_personal_access_token"
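
If you want to confirm that the token works before going further, you can query the DigitalOcean API directly with it. This quick check is optional and not part of the tutorial's configuration; a successful response returns your account details as JSON:

curl -s -H "Authorization: Bearer ${DO_PAT}" "https://api.digitalocean.com/v2/account"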

Create a directory that will store your infrastructure configuration by running the following command:

mkdir ~/loadbalance

Go to the newly created directory:

cd ~/loadbalance

Terraform configurations are text files that end with the .tf file extension. They are human-readable and support comments. (Terraform also supports JSON-formatted configuration files, but they are not covered here.) Terraform reads all configuration files in your working directory declaratively, so the order of resource and variable definitions does not matter. Your entire infrastructure can exist in a single configuration file, but for clarity, you should separate configuration files by resource type.
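
As a hypothetical illustration of the syntax, the following is valid HCL; the lines beginning with # are comments, and the block could live in any .tf file in the working directory (the example_region name is only an example, not something used later in this tutorial):

# Comments like this are ignored by Terraform.
variable "example_region" {
  default = "nyc3"
}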

The first step in creating infrastructure with Terraform is to define the provider you are going to use.

To use a DigitalOcean provider with Terraform, you need to tell Terraform about it and configure the plugin with the appropriate credentials variables. Create a file called provider.tf that stores the configuration for the provider:

nano provider.tf

Add the following lines to the file to tell Terraform that you want to use the DigitalOcean provider and tell Terraform where to find it:

terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

Then, define the following variables in the file so that you can reference them in the rest of your configuration files:

  • do_token: Your DigitalOcean personal access token.
  • pvt_key: The location of the private key, so Terraform can use it to log into new Droplets and install Nginx.

You pass the values of these variables to Terraform when it runs, rather than hard-coding the values here. This makes the configuration more portable.
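
As an aside, command-line -var flags are not the only way to supply these values: Terraform also reads environment variables named TF_VAR_<variable name>. For example, assuming the DO_PAT variable from earlier, you could export the values once per shell session instead of repeating them on every command:

export TF_VAR_do_token="${DO_PAT}"
export TF_VAR_pvt_key="$HOME/.ssh/id_rsa"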

To define these variables, add these lines to the file:

...
variable "do_token" {}
variable "pvt_key" {}

Then, add these lines to configure the DigitalOcean provider, assigning the do_token variable to the provider's token argument:

...
provider "digitalocean" {
token = var.do_token
}

Finally, you want Terraform to automatically add your SSH key to every new Droplet you create. When you added your SSH key to DigitalOcean, you gave it a name, and Terraform can use that name to retrieve the public key. Add these lines, replacing terraform with the name of the key in your DigitalOcean account:

...
data "digitalocean_ssh_key" "terraform" {
name = "terraform"
}

Your completed provider.tf file will look like this:

terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

variable "do_token" {}
variable "pvt_key" {}

provider "digitalocean" {
  token = var.do_token
}

data "digitalocean_ssh_key" "terraform" {
  name = "terraform"
}

Initialize Terraform for your project by running:

terraform init

This will read your configuration and install the plugins for your provider. You will see that in the output:

Output
Initializing the backend...
Initializing provider plugins...
- Finding digitalocean/digitalocean versions matching "~> 2.0"...
- Installing digitalocean/digitalocean v2.34.1...
- Installed digitalocean/digitalocean v2.34.1 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

If you get stuck and Terraform isn't working as you expect, you can start over by deleting the terraform.tfstate file, manually removing the resources that were created (for example, through the control panel), and running through the steps again.

Terraform is now configured and can connect to your DigitalOcean account. Next, you will use Terraform to define a Droplet that will run an Nginx server.

Step 3 – Define the first Nginx server

You can use Terraform to create a DigitalOcean Droplet and install software on it once it boots. In this step, you will use Terraform to provision an Ubuntu 20.04 Droplet and install the Nginx web server on it.

Create a new Terraform configuration file called www-1.tf to hold the Droplet configuration:

nano www-1.tf

Enter the following lines to define the Droplet resource:

resource "digitalocean_droplet" "www-1" {
image = "ubuntu-20-04-x64"
name = "www-1"
region = "nyc3"
size = "s-1vcpu-1gb"
ssh_keys = [
data.digitalocean_ssh_key.terraform.id
]

In this configuration, the first line defines a digitalocean_droplet resource called www-1. The remaining lines specify the Droplet's attributes, including the data center it lives in and the slug that determines its size. In this case, the s-1vcpu-1gb slug creates a Droplet with one CPU and 1GB of RAM. (See this slug size chart for the available sizes.)

The ssh_keys section specifies a list of public keys to add to the Droplet. In this case, it references the key defined in provider.tf; make sure the name here matches the name you gave that data source.

When you run Terraform against the DigitalOcean API, it collects various information about the Droplet, such as its public and private IP addresses. This information can be used by other resources in your configuration.
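
For example, you could expose the Droplet's public IP address as an output value. This output block is only an illustration and is not part of the tutorial's files, but the ipv4_address attribute it reads is the same one the connection block uses later:

output "www_1_ip" {
  value = digitalocean_droplet.www-1.ipv4_address
}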

If you don't know which arguments are required or optional for a Droplet resource, refer to the official Terraform documentation: DigitalOcean Droplet Specification.

To set up a connection that Terraform can use to connect to the server via SSH, add the following lines to the end of the file:

...
  connection {
    host        = self.ipv4_address
    user        = "root"
    type        = "ssh"
    private_key = file(var.pvt_key)
    timeout     = "2m"
  }

These lines tell Terraform how to connect to the Droplet so it can install Nginx over SSH. Note the use of the var.pvt_key variable for the private key; you pass its value to Terraform when you run it.

Now that the connection is set up, configure the remote-exec provisioner, which you will use to install Nginx. To do this, add the following lines to the configuration:

...
provisioner "remote-exec" {
inline = [
"export PATH=$PATH:/usr/bin",
# install nginx
"sudo apt update",
"sudo apt install -y nginx"
]
}
}

Note that the strings in the inline array are the commands that the root user runs to install Nginx.

The completed file looks like this:

resource "digitalocean_droplet" "www-1" {
image = "ubuntu-20-04-x64"
name = "www-1"
region = "nyc3"
size = "s-1vcpu-1gb"
ssh_keys = [
data.digitalocean_ssh_key.terraform.id
]
connection {
host = self.ipv4_address
user = "root"
type = "ssh"
private_key = file(var.pvt_key)
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"export PATH=$PATH:/usr/bin",
# install nginx
"sudo apt update",
"sudo apt install -y nginx"
]
}
}

Save the file and exit the editor. You have defined the server and are ready to deploy it, which you will do now.

Step 4 – Using Terraform to Create an Nginx Server

Your current Terraform configuration describes an Nginx server. Now you will deploy the Droplet exactly as defined.

Run terraform plan to see the execution plan, that is, what Terraform will do to build the infrastructure you described. You must provide values for your DigitalOcean access token and the path to your private key, since your configuration uses this information to access the Droplet and install Nginx. To create the plan, run the following command:

terraform plan \
-var "do_token=${DO_PAT}" \
-var "pvt_key=$HOME/.ssh/id_rsa"

You will see output similar to this:

Output
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# digitalocean_droplet.www-1 will be created
+ resource "digitalocean_droplet" "www-1" {
+ backups = false
+ created_at = (known after apply)
+ disk = (known after apply)
+ graceful_shutdown = false
+ id = (known after apply)
+ image = "ubuntu-20-04-x64"
+ ipv4_address = (known after apply)
+ ipv4_address_private = (known after apply)
+ ipv6 = false
+ ipv6_address = (known after apply)
+ locked = (known after apply)
+ memory = (known after apply)
+ monitoring = false
+ name = "www-1"
+ price_hourly = (known after apply)
+ price_monthly = (known after apply)
+ private_networking = (known after apply)
+ region = "nyc3"
+ resize_disk = true
+ size = "s-1vcpu-1gb"
+ ssh_keys = [
+ "...",
]
+ status = (known after apply)
+ urn = (known after apply)
+ vcpus = (known after apply)
+ volume_ids = (known after apply)
+ vpc_uuid = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
───────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

The line + resource "digitalocean_droplet" "www-1" means that Terraform will create a new Droplet resource called www-1 with the details that follow. This is exactly what should happen, so run terraform apply to execute the plan:

terraform apply \
-var "do_token=${DO_PAT}" \
-var "pvt_key=$HOME/.ssh/id_rsa"

You will get the same output as before, but this time Terraform will ask you if you want to continue:

Output
...
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes

Enter yes and press ENTER. Terraform will provision your Droplet:

Output
digitalocean_droplet.www-1: Creating...

After a while, you will see Terraform installing Nginx with the remote-exec provisioner, and then the process will complete:

Output
digitalocean_droplet.www-1: Provisioning with 'remote-exec'...
....
digitalocean_droplet.www-1: Creation complete after 1m54s [id=your_www-1_droplet_id]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
...

Terraform has created a new Droplet called www-1 and installed Nginx on it. If you visit the public IP address of your new Droplet, you will see the Nginx welcome page. The public IP is displayed when the Droplet is created, but you can always see it by viewing the current Terraform state. Terraform updates the terraform.tfstate state file each time it runs a plan or updates its state.

To view the current status of your environment, use the following command:

terraform show terraform.tfstate

This shows the public IP address of your Droplet.

Output
resource "digitalocean_droplet" "www-1" {
backups = false
created_at = "..."
disk = 25
id = "your_www-1_droplet_id"
image = "ubuntu-20-04-x64"
ipv4_address = "your_www-1_server_ip"
ipv4_address_private = "10.128.0.2"
...

To verify that your Nginx server is running, go to http://your_www-1_server_ip in your browser.

At this point, you have deployed the Droplet you described in Terraform. Now you will create a second one.

Step 5 – Create a Second Nginx Server

Now that you have an Nginx server configured, you can quickly add a second by copying the existing server configuration file and replacing the Droplet source name and hostname.

You could do this manually, but it is faster to use sed to read www-1.tf, replace all instances of www-1 with www-2, and write the result to a new file called www-2.tf. Here is the sed command to do that:

sed 's/www-1/www-2/g' www-1.tf > www-2.tf

You can learn more about sed by reading Using sed.

To preview the changes Terraform will make, run terraform plan again:

terraform plan \
-var "do_token=${DO_PAT}" \
-var "pvt_key=$HOME/.ssh/id_rsa"

The output shows that Terraform will create a second server www-2:

Output
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# digitalocean_droplet.www-2 will be created
+ resource "digitalocean_droplet" "www-2" {
+ backups = false
+ created_at = (known after apply)
+ disk = (known after apply)
+ id = (known after apply)
+ image = "ubuntu-20-04-x64"
+ ipv4_address = (known after apply)
+ ipv4_address_private = (known after apply)
+ ipv6 = false
+ ipv6_address = (known after apply)
+ locked = (known after apply)
+ memory = (known after apply)
+ monitoring = false
+ name = "www-2"
+ price_hourly = (known after apply)
+ price_monthly = (known after apply)
+ private_networking = true
+ region = "nyc3"
+ resize_disk = true
+ size = "s-1vcpu-1gb"
+ ssh_keys = [
+ "...",
]
+ status = (known after apply)
+ urn = (known after apply)
+ vcpus = (known after apply)
+ volume_ids = (known after apply)
+ vpc_uuid = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
...

Run terraform apply again to create the second Droplet:

terraform apply \
-var "do_token=${DO_PAT}" \
-var "pvt_key=$HOME/.ssh/id_rsa"

As before, Terraform will ask you to confirm that you want to continue. Review the plan again and type yes to proceed.

After a while, Terraform creates the new server and displays the results:

Output
digitalocean_droplet.www-2: Creation complete after 1m47s [id=your_www-2_droplet_id]
...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Terraform created the new server while not modifying the existing one. You can repeat this step to add additional Nginx servers.
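
If you plan to add servers regularly, copying files scales poorly. As an alternative (a sketch that the rest of this tutorial does not use), Terraform's count meta-argument can create several identical Droplets from a single resource block, naming them www-0, www-1, and so on:

resource "digitalocean_droplet" "www" {
  count  = 2
  image  = "ubuntu-20-04-x64"
  name   = "www-${count.index}"
  region = "nyc3"
  size   = "s-1vcpu-1gb"
  ssh_keys = [
    data.digitalocean_ssh_key.terraform.id
  ]
}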

Now that you have two Droplets running Nginx, you will define and deploy a load balancer to divide traffic between them.

Step 6 – Load Balancing

You will use DigitalOcean Load Balancer, which is supported by the official Terraform provider, to route traffic between the two web servers.

Create a new Terraform configuration file called loadbalancer.tf:

nano loadbalancer.tf

Add the following lines to define the Load Balancer:

resource "digitalocean_loadbalancer" "www-lb" {
name = "www-lb"
region = "nyc3"
forwarding_rule {
entry_port = 80
entry_protocol = "http"
target_port = 80
target_protocol = "http"
}
healthcheck {
port = 22
protocol = "tcp"
}
droplet_ids = [digitalocean_droplet.www-1.id, digitalocean_droplet.www-2.id ]
}

The Load Balancer definition specifies its name, the data center it will live in, the ports it should listen on to balance traffic, the health check configuration, and the IDs of the Droplets it should balance, which you reference using their Terraform resource attributes.

Next, define a status check to make sure the Load Balancer is actually available after deployment:

check "health_check" {
data "http" "lb_check" {
url = "http://${digitalocean_loadbalancer.www-lb.ip}"
}
assert {
condition = data.http.lb_check.status_code == 200
error_message = "${data.http.lb_check.url} returned an unhealthy status code"
}
}

This check makes an HTTP request to the Load Balancer's IP address and verifies that it returns a 200 status code, indicating that the Droplets behind it are healthy and reachable. If there is an error or a different status code, a warning is displayed after the deployment steps. When you are finished, save and close the file.

Run terraform plan again to review the new execution plan:

terraform plan \
-var "do_token=${DO_PAT}" \
-var "pvt_key=$HOME/.ssh/id_rsa"

You will see several lines of output, including the following lines:

Output
...
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
<= read (data resources)
Terraform will perform the following actions:
# data.http.lb_check will be read during apply
# (config refers to values not yet known)
<= data "http" "lb_check" {
+ body = (known after apply)
+ id = (known after apply)
+ response_body = (known after apply)
+ response_body_base64 = (known after apply)
+ response_headers = (known after apply)
+ status_code = (known after apply)
+ url = (known after apply)
}
# digitalocean_loadbalancer.www-lb will be created
+ resource "digitalocean_loadbalancer" "www-lb" {
+ algorithm = "round_robin"
+ disable_lets_encrypt_dns_records = false
+ droplet_ids = [
+ ...,
+ ...,
]
+ enable_backend_keepalive = false
+ enable_proxy_protocol = false
+ http_idle_timeout_seconds = (known after apply)
+ id = (known after apply)
+ ip = (known after apply)
+ name = "www-lb"
+ project_id = (known after apply)
+ redirect_http_to_https = false
+ region = "nyc3"
+ size_unit = (known after apply)
+ status = (known after apply)
+ urn = (known after apply)
+ vpc_uuid = (known after apply)
+ forwarding_rule {
+ certificate_id = (known after apply)
+ certificate_name = (known after apply)
+ entry_port = 80
+ entry_protocol = "http"
+ target_port = 80
+ target_protocol = "http"
+ tls_passthrough = false
}
+ healthcheck {
+ check_interval_seconds = 10
+ healthy_threshold = 5
+ port = 22
+ protocol = "tcp"
+ response_timeout_seconds = 5
+ unhealthy_threshold = 3
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
│ Warning: Check block assertion known after apply
│ on loadbalancer.tf line 27, in check "health_check":
│ 27: condition = data.http.lb_check.status_code == 200
│ ├────────────────
│ │ data.http.lb_check.status_code is a number
│
│ The condition could not be evaluated at this time, a result will be known when this plan is applied.
╵
...

Since the Droplets www-1 and www-2 already exist, Terraform will create only the www-lb Load Balancer and run the check after provisioning it.

Before deploying, you need to reinitialize the project so Terraform installs the http provider used by the health_check block:

terraform init -upgrade

Then, run terraform apply to create the Load Balancer:

terraform apply \
-var "do_token=${DO_PAT}" \
-var "pvt_key=$HOME/.ssh/id_rsa"

Once again, Terraform will ask you to review the plan. Confirm it by entering yes to continue.

After doing this, you will see the output containing the following lines, truncated for brevity:

Output
...
digitalocean_loadbalancer.www-lb: Creating...
...
digitalocean_loadbalancer.www-lb: Creation complete after 1m18s [id=your_load_balancer_id]
data.http.lb_check: Reading...
data.http.lb_check: Read complete after 0s [id=http://lb-ip]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
...

Note that lb_check completed successfully.

To find your Load Balancer's IP address, use terraform show terraform.tfstate:

terraform show terraform.tfstate

You will find the IP under the www-lb entry:

Output
...
# digitalocean_loadbalancer.www-lb:
resource "digitalocean_loadbalancer" "www-lb" {
algorithm = "round_robin"
disable_lets_encrypt_dns_records = false
droplet_ids = [
your_www-1_droplet_id,
your_www-2_droplet_id,
]
enable_backend_keepalive = false
enable_proxy_protocol = false
id = "your_load_balancer_id"
ip = "your_load_balancer_ip"
name = "www-lb"
...

Go to http://your_load_balancer_ip in your browser and you will see the Nginx welcome page as the Load Balancer forwards traffic to one of the two Nginx servers.

Now you will learn how to configure DNS for your DigitalOcean account using Terraform.

Step 7 – Create Domains and DNS Records

In addition to Droplets and Load Balancers, Terraform can also manage DNS domains and records. For example, if you want to point your domain to your Load Balancer, you can write configuration that describes that relationship.

Create a new file to describe your DNS:

nano domain_root.tf

Add the following domain resource, replacing your_domain with your domain name:

resource "digitalocean_domain" "default" {
name = "your_domain"
ip_address = digitalocean_loadbalancer.www-lb.ip
}

When finished, save and close the file.

You can also add a CNAME record that points www.your_domain to your_domain. Create a new file for the CNAME record:

nano domain_cname.tf

Add these lines to the file:

resource "digitalocean_record" "CNAME-www" {
domain = digitalocean_domain.default.name
type = "CNAME"
name = "www"
value = "@"
}

When finished, save and close the file.

To add the DNS entries, run terraform plan and then terraform apply, just as you did for the other resources.
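
Using the same variable values as before, those commands look like this:

terraform plan \
-var "do_token=${DO_PAT}" \
-var "pvt_key=$HOME/.ssh/id_rsa"

terraform apply \
-var "do_token=${DO_PAT}" \
-var "pvt_key=$HOME/.ssh/id_rsa"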

Go to your domain name and you will see the Nginx welcome page because the domain points to the Load Balancer which sends traffic to one of the two Nginx servers.

Step 8 – Destroy your infrastructure

Although you won't do this often in production environments, Terraform can also destroy the infrastructure it creates. This is mainly useful in development environments that are deployed and torn down multiple times.

First, create an execution plan to destroy the infrastructure by running terraform plan with the -destroy flag:

terraform plan -destroy -out=terraform.tfplan \
-var "do_token=${DO_PAT}" \
-var "pvt_key=$HOME/.ssh/id_rsa"

Terraform outputs a plan with resources highlighted in red, prefixed with a minus sign, indicating that it will remove your infrastructure resources.

Then apply the saved plan with terraform apply:

terraform apply terraform.tfplan

Terraform will continue to destroy resources as specified in the generated plan.
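
If you don't need to review a saved plan first, the terraform destroy command is a shorter path; it builds the destroy plan and prompts for confirmation in a single step:

terraform destroy \
-var "do_token=${DO_PAT}" \
-var "pvt_key=$HOME/.ssh/id_rsa"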

Conclusion

In this tutorial, you used Terraform to create a load-balanced web infrastructure on DigitalOcean, with two Nginx web servers running behind a DigitalOcean Load Balancer. You learned how to create and destroy resources, view the current state, and use Terraform to configure DNS entries.

Now that you understand how Terraform works, you can create configuration files that describe the server infrastructure for your projects. The example in this tutorial is a good starting point to show how you can automate the deployment of servers. If you already use provisioning tools, you can integrate them with Terraform to configure servers as part of the creation process, rather than using the provisioning method used in this tutorial.

Terraform has many more features and can work with other providers. For more information on how to use Terraform to improve your infrastructure, check out the official Terraform documentation. This tutorial is part of the How to Manage Infrastructure with Terraform series. This series covers a number of Terraform topics, from installing Terraform for the first time to managing complex projects.
