Introduction
In this tutorial, we will set up a HashiCorp Nomad cluster with Consul for service and node discovery.
With 3 server nodes and an arbitrary number of client nodes, this setup can serve as a foundation for growing projects.
We will also create a Hetzner Cloud Snapshot for clients, allowing additional clients to be added without manual configuration.
The cluster runs on a private network between servers and supports all Nomad and Consul features out of the box, such as service discovery and volume management.
This tutorial partially follows the recommended steps in the official Consul and Nomad deployment guide.
Prerequisites
- A Hetzner Cloud account
- Basic knowledge of Linux and shell commands
- Ability to connect to the server with ssh
- Have a Hetzner server
This tutorial has been tested on Ubuntu 24.04 Hetzner Cloud servers with Nomad version 1.9.3 and Consul 1.20.1.
Terms and symbols
Commands
local$ <command> — This command should be run on your local machine.
server$ <command> — This command should be run as root on the server.
client$ <command> — This command should be run as root on a client machine.
Step 1 – Create the base image
The following resource will be used in this step:
- 1 Hetzner Cloud Server Type CX22
We will start by setting up a Consul/Nomad server on a new CX22 server. The snapshot created will serve as the base image for all cluster servers and clients in the following steps.
This guide shows the setup for 3 servers, which keeps the cluster highly available without becoming too expensive. Running a single server is possible but not recommended. Where relevant, the tutorial notes the changes needed for cluster sizes other than 3.
Go to the Hetzner Cloud web interface and create a new CX22 server with Ubuntu 24.04.
Step 1.1 – Installing Consul
Install Consul
For more information about available versions, visit the official website.
server$ wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
server$ echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | tee /etc/apt/sources.list.d/hashicorp.list
server$ apt update && apt install consul
Now we can add autocomplete functionality to Consul (optional):
server$ consul -autocomplete-install
Preparing TLS certificates for Consul
server$ consul tls ca create
server$ consul tls cert create -server -dc dc1
server$ consul tls cert create -server -dc dc1
server$ consul tls cert create -server -dc dc1
server$ consul tls cert create -client -dc dc1
These commands create 3 server certificates and one client certificate. If you want to run fewer or more than 3 servers in the Consul cluster, adjust the number of times you repeat consul tls cert create -server -dc dc1: a single-server cluster needs only one server certificate, a 5-server cluster needs 5.
The folder /root/ should now contain at least the following files:
server$ ls
consul-agent-ca.pem
consul-agent-ca-key.pem
dc1-server-consul-0.pem
dc1-server-consul-0-key.pem
dc1-server-consul-1.pem
dc1-server-consul-1-key.pem
dc1-server-consul-2.pem
dc1-server-consul-2-key.pem
dc1-client-consul-0.pem
dc1-client-consul-0-key.pem
Step 1.2 – Installing the Nomad binary
Install Nomad
For more information about available versions, visit the official website.
server$ wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
server$ echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | tee /etc/apt/sources.list.d/hashicorp.list
server$ apt update && apt install nomad
Add autocomplete for Nomad (optional):
server$ nomad -autocomplete-install
Add the following configuration to /etc/nomad.d/nomad.hcl:
datacenter = "dc1"
data_dir = "/opt/nomad"
Step 1.3 – Preparing systemd services
Consul and Nomad should start automatically after reboot. To do this, create a systemd service for each.
First, set all permissions:
server$ chown consul:consul dc1-server-consul*
server$ chown consul:consul dc1-client-consul*
server$ chown -R consul:consul /opt/consul
server$ chown -R nomad:nomad /opt/nomad
server$ mkdir -p /opt/alloc_mounts && chown -R nomad:nomad /opt/alloc_mounts
Add the following configuration to /etc/systemd/system/consul.service:
[Unit]
Description="HashiCorp Consul - A service mesh solution"
Documentation=https://www.consul.io/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/consul.d/consul.hcl
[Service]
EnvironmentFile=-/etc/consul.d/consul.env
User=consul
Group=consul
ExecStart=/usr/bin/consul agent -config-dir=/etc/consul.d/
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGTERM
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Then add the following configuration to /etc/systemd/system/nomad.service:
[Unit]
Description=Nomad
Documentation=https://www.nomadproject.io/docs/
Wants=network-online.target
After=network-online.target
[Service]
User=nomad
Group=nomad
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/bin/nomad agent -config /etc/nomad.d
KillMode=process
KillSignal=SIGINT
LimitNOFILE=65536
LimitNPROC=infinity
Restart=on-failure
RestartSec=2
OOMScoreAdjust=-1000
TasksMax=infinity
[Install]
WantedBy=multi-user.target
Do not enable these services yet, as the setup is not complete.
Step 1.5 – Create a Base Snapshot
Finally, stop the server in the Hetzner Cloud console and create a Snapshot. This Snapshot will be used as the basis for configuring the server and cluster clients.
After the snapshot creation is successful, delete the CX22 instance from this step.
Step 2 – Setting up cluster servers
In this step, you will create 3 cluster servers from the base image created in step 1.
These servers form the basis of your cluster and dynamically elect a cluster leader.
The following resources will be used in this step:
- 1 Hetzner Cloud Network
- 3 Hetzner Cloud Servers Type CX22
In the Hetzner Cloud Console, create 3 CX22 servers from the Snapshot created in Step 1 and a shared Cloud Network.
This guide uses the 10.0.0.0/8 network, but smaller networks will work as well.
In the following steps, the tutorial will mention servers with internal addresses 10.0.0.2, 10.0.0.3, and 10.0.0.4.
If your servers have different internal addresses, replace them in the next steps.
Step 2.1 – Create a symmetric encryption key
First, create a symmetric encryption key that will be shared between all servers. Store this key in a safe place; we will need it in the next steps.
server$ consul keygen
Step 2.2 – Configuring Consul
Create a Consul configuration file for each server. This file will contain the internal IP addresses of the other servers.
On each server, create the following file at /etc/consul.d/consul.hcl:
datacenter = "dc1"
data_dir = "/opt/consul"
encrypt = "<CONSUL_KEY>"
ca_file = "/root/consul-agent-ca.pem"
cert_file = "/root/dc1-server-consul-0.pem"
key_file = "/root/dc1-server-consul-0-key.pem"
verify_incoming = true
verify_outgoing = true
verify_server_hostname = true
performance {
  raft_multiplier = 1
}
retry_join = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]
bind_addr = "0.0.0.0"
client_addr = "0.0.0.0"
server = true
bootstrap_expect = 3
ui = true
Replace <CONSUL_KEY> with the key generated from the command consul keygen.
On the other servers, change cert_file and key_file to match that server's certificate.
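For example, on the second server, the certificate lines would point at the next numbered pair created in Step 1.1:

```hcl
# Second server: use its own certificate pair from /root/.
cert_file = "/root/dc1-server-consul-1.pem"
key_file  = "/root/dc1-server-consul-1-key.pem"
```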
Step 2.3 – Configuring Nomad
Create a Nomad configuration file for each server at /etc/nomad.d/nomad.hcl:
datacenter = "dc1"
data_dir = "/opt/nomad"
bind_addr = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
server {
  enabled = true
  bootstrap_expect = 3
  encrypt = "<NOMAD_KEY>"
  server_join {
    retry_join = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]
  }
}
tls {
  http = true
  rpc = true
  ca_file = "/root/consul-agent-ca.pem"
  cert_file = "/root/dc1-server-consul-0.pem"
  key_file = "/root/dc1-server-consul-0-key.pem"
  verify_server_hostname = true
}
acl {
  enabled = true
}
Replace the 10.0.0.x addresses with the internal addresses of your servers, and replace <NOMAD_KEY> with a key generated by nomad operator keygen (Nomad uses its own gossip key, separate from the Consul key). As in the Consul configuration, change cert_file and key_file on the other servers, and set bootstrap_expect = 1 for a single-server cluster.
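Since a Consul agent runs on every node, Nomad can optionally use it for service and node discovery. A minimal sketch of the consul stanza for /etc/nomad.d/nomad.hcl follows; the values shown are Nomad's defaults, so adjust the address if your Consul agent listens elsewhere:

```hcl
# Optional: integrate Nomad with the local Consul agent.
consul {
  address          = "127.0.0.1:8500"  # local Consul agent
  auto_advertise   = true              # advertise Nomad services in Consul
  server_auto_join = true              # servers discover each other via Consul
  client_auto_join = true              # clients find servers via Consul
}
```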
Step 2.4 – Activate and launch services
After creating the configuration files, enable and start Consul and Nomad as systemd services:
server$ systemctl enable consul
server$ systemctl start consul
server$ systemctl enable nomad
server$ systemctl start nomad
Check the status of each service using the following commands:
server$ systemctl status consul
server$ systemctl status nomad
Step 2.5 – Verify the Consul cluster
To check the status of the Consul cluster, run the following command:
server$ consul members
You should see a list of all members of the cluster. If there are any problems, double-check the configuration files and read the logs:
server$ journalctl -u consul
Step 2.6 – Verifying the Nomad Cluster
Check the status of the Nomad cluster using the following command:
server$ nomad server members
You should see a list of Nomad servers in the cluster. If there are any problems, check the configuration files and read the logs:
server$ journalctl -u nomad
Step 2.7 – Check the web interface
The Consul and Nomad web interfaces should now be available:
- Consul: http://<SERVER_IP>:8500
- Nomad: http://<SERVER_IP>:4646
Replace <SERVER_IP> with the server's public or private IP address.
Step 3 – Add Clients
Now, you can add Nomad clients. This step is similar to setting up the server, but with the client role.
Step 3.1 – Install and configure clients
The installation steps for Nomad clients are similar to those for servers. If you create the clients from the base Snapshot from Step 1, Consul and Nomad are already installed; otherwise, install both packages as described in Step 1.
Step 3.2 – Setting up Consul for clients
Create a Consul configuration file at /etc/consul.d/consul.hcl. For clients, the configuration is simpler:
datacenter = "dc1"
data_dir = "/opt/consul"
encrypt = "<CONSUL_KEY>"
ca_file = "/root/consul-agent-ca.pem"
cert_file = "/root/dc1-client-consul-0.pem"
key_file = "/root/dc1-client-consul-0-key.pem"
verify_incoming = true
verify_outgoing = true
verify_server_hostname = true
retry_join = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]
bind_addr = "0.0.0.0"
client_addr = "0.0.0.0"
server = false
Note that the IP addresses in retry_join are the internal addresses of the Consul servers, and that the certificate and key files must be the client certificate and key created in Step 1.1.
Step 3.3 – Setting up Nomad for clients
Create the Nomad configuration file at /etc/nomad.d/nomad.hcl. For clients, it includes the following settings:
datacenter = "dc1"
data_dir = "/opt/nomad"
bind_addr = "0.0.0.0"
advertise {
  http = "10.0.1.2:4646"
  rpc = "10.0.1.2:4647"
  serf = "10.0.1.2:4648"
}
client {
  enabled = true
  network_interface = "eth0"
  servers = ["10.0.0.2:4647", "10.0.0.3:4647", "10.0.0.4:4647"]
}
tls {
  http = true
  rpc = true
  ca_file = "/root/consul-agent-ca.pem"
  cert_file = "/root/dc1-client-consul-0.pem"
  key_file = "/root/dc1-client-consul-0-key.pem"
  verify_server_hostname = true
}
The advertise addresses must point to this client's own internal address (10.0.1.2 in this example), and the servers list contains the internal addresses of the Nomad servers.
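Hardcoding the client's internal IP in the advertise block means every client built from the Snapshot would need manual editing. As a sketch, assuming the 10.0.0.0/8 private network from Step 2, the same go-sockaddr template used for the servers' bind_addr can be reused so one file works on every client:

```hcl
# Advertise the address of the interface in the private network;
# Nomad appends the default ports (4646/4647/4648) automatically.
advertise {
  http = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
  rpc  = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
  serf = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
}
```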
Step 3.4 – Activate and launch services
As with the servers, enable and start the Consul and Nomad services on each client:
client$ systemctl enable consul
client$ systemctl start consul
client$ systemctl enable nomad
client$ systemctl start nomad
Check the status of the services:
client$ systemctl status consul
client$ systemctl status nomad
Step 3.5 – Checking Client Connectivity
Make sure that clients are connected to the Consul cluster. From a server, run the following command:
server$ consul members
The clients should now be visible in the list. Also check Nomad:
server$ nomad node status
Clients should be in the "ready" state. If there is a problem, check the configuration files and read the logs:
client$ journalctl -u consul
client$ journalctl -u nomad
Step 4 – Running jobs in Nomad
After ensuring proper setup, you can define and run your jobs on Nomad.
Create a job configuration file, for example:
job "example" {
  datacenters = ["dc1"]
  group "example-group" {
    task "example-task" {
      driver = "docker"
      config {
        image = "nginx:latest"
      }
      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
Save the file as example.nomad and run it with the following command:
server$ nomad job run example.nomad
Check the status of the job:
server$ nomad job status example
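To actually use the service discovery mentioned at the start, the job can register itself as a service. A sketch follows; the service name example-web and port label http are illustrative choices, not from the tutorial, and provider = "consul" assumes the Nomad agents can reach a Consul agent (with provider = "nomad", Nomad 1.3+ uses its built-in service discovery instead):

```hcl
# Variant of the example job that registers an "example-web" service.
job "example" {
  datacenters = ["dc1"]
  group "example-group" {
    network {
      port "http" {
        to = 80   # container port exposed by nginx
      }
    }
    task "example-task" {
      driver = "docker"
      config {
        image = "nginx:latest"
        ports = ["http"]
      }
      service {
        name     = "example-web"
        port     = "http"
        provider = "consul"
      }
      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
```

Other jobs can then look up example-web via Consul DNS or the Consul API.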
Result
You have successfully set up a Consul and Nomad cluster. You can now use this infrastructure to run and manage your jobs and services.