
Consul + Fabio + Your Application

Mike R
Jun 27, 2018


Sysinfo: CentOS 7, Consul 0.9.2, hypothetical Pong application

Background

We need a good way to route users to a custom application (let's call it Pong) running on port 8300, fronted by Apache on port 80. There are 3 nodes running Pong, and these will be monitored by Consul.

Fabio comes in as an add-on to Consul: it does the actual routing and maintains its own routing table based on Consul data.

Fabio is great because you don't need to update the routing table yourself; it is completely automated. Fabio reads whatever services and hosts are registered in Consul and works off that.

Note:

This article is not a good solution for apps that need persistent sessions: Consul and Fabio do not support session persistence. This setup is best suited to small applications and microservices. For large applications like Splunk, or anything Java-based, use HAProxy to balance the nodes.

Also, this setup does not ensure COMPLETE 100% failover, as users are routed to 1 single Fabio instance. For true 100% failover, you will need to set up 3 instances of Fabio plus a Virtual IP that binds them together, and have users access that 1 Virtual IP.

Basic structure:

1. Users go to a shared (virtual) IP, let's say “pong.company.com” -> this is resolved by DNS to 10.155.20.5 (a quick check for this is shown after the list)

2. The Virtual IP then passes the request to one of the 2 Consul/Fabio servers

3. Fabio reads healthcheck data from Consul, determines there are 3 active Pong instances, and routes the request to one of them
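
A quick way to sanity-check step 1 from a workstation (a sketch: the hostname and IP are the hypothetical values from the list above, and dig comes from the bind-utils package on CentOS):

dig +short pong.company.com
# should return the VIP from the example above: 10.155.20.5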

Configure Consul server cluster

Let's create a 2-node Consul cluster. Consul will monitor application service health, and Fabio will run on top of each Consul server and build its routing table from Consul data.

1. on each Consul server, install Consul
yum install consul

2. generate an Encryption key & Master Token
consul keygen
UAvkAzdjGfQ7J2NlgkrJMA==

generate a Master Token

uuidgen
dbef8b5a-6110-4575-bf61-dda1c21ca339

3. create Consul dirs
mkdir -p /etc/consul.d/server
mkdir /var/consul

4. add Consul user + group
groupadd consul
useradd consul -g consul

5. change permissions

chown -R consul:consul /var/consul

Consul Config

6. on the 1st Consul node, create a new Bootstrap config (/etc/consul.d/server/consul.json),

{
  "bind_addr": "10.185.20.180",
  "client_addr": "0.0.0.0",
  "data_dir": "/var/consul",
  "server": true,
  "ui": true,
  "bootstrap": true,
  "retry_join": ["10.185.20.179","10.185.20.180"],
  "datacenter": "mrx",
  "enable_script_checks": true,
  "encrypt": "UAvkAzdjGfQ7J2NlgkrJMA==",
  "enable_syslog": true,
  "addresses": {
    "http": "10.185.20.180",
    "dns": "10.185.20.180"
  },
  "dns_config": {
    "allow_stale": true,
    "max_stale": "30s",
    "node_ttl": "30s",
    "enable_truncate": true,
    "only_passing": true
  },
  "acl_datacenter": "mrx",
  "acl_down_policy": "extend-cache",
  "acl_default_policy": "allow",
  "acl_master_token": "dbef8b5a-6110-4575-bf61-dda1c21ca339"
}

validate the syntax

consul validate /etc/consul.d/*

fix any validation errors

Startup Service

7. create Startup Service

vim /usr/lib/systemd/system/consul.service

[Unit]
Description=Consul service discovery agent
Requires=network-online.target
After=network-online.target

[Service]
User=consul
Group=consul
Restart=on-failure
ExecStartPre=-/usr/bin/rm -f /var/consul/consul.pid
ExecStart=/usr/bin/consul agent -config-dir=/etc/consul.d -config-file=/etc/consul.d/server/consul.json
ExecReload=/bin/kill -s HUP $MAINPID
KillSignal=SIGINT
TimeoutStopSec=5

[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl enable consul.service
systemctl start consul.service
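
At this point the first node should be up as a single-server cluster. A quick sanity check (the -http-addr matches the "addresses" block in the config above):

systemctl status consul.service --no-pager
consul members -http-addr=10.185.20.180:8500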

8. Do the same on the remaining Consul server: update bind_addr and addresses to that node's IP and set "bootstrap": false

9. Start the service. The web console (UI) should now be available at <IP>:8500
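
You can also confirm via the HTTP API that a leader has been elected and both servers are Raft peers (using the first server's address from the config above):

curl -s http://10.185.20.180:8500/v1/status/leader
curl -s http://10.185.20.180:8500/v1/status/peers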

Create ACL on Consul cluster

Log in to the web UI, click the config gear, enter the Master ACL token, and click Close. This saves the Master token for access between your browser and the Consul cluster (this is better than using a username/password).

Now click the ACL button and update the Anonymous Token ACL to allow read-only access

Now create a new ACL for the Pong service with the rules below (an HTTP API alternative is shown after the rules),

service "" { policy = "write" }
key "pong/" { policy = "write" }
node "" { policy = "write" }
session "" { policy = "write" }

get the Token ID of this Pong ACL policy

go back to each Consul node and change the acl_default_policy setting in consul.json to "deny"

"acl_default_policy": "deny"

restart Consul service on all server nodes

Configure Consul agent on each Pong instance

on each Pong node, install Consul

Configure the Consul agent on each Pong instance; for the acl_token field, use the Pong ACL token created above

pong01> vim /etc/consul.d/client/consul.json

{
  "bind_addr": "10.185.20.173",
  "data_dir": "/var/consul",
  "ui": false,
  "bootstrap": false,
  "server": false,
  "start_join": ["10.185.20.179","10.185.20.180"],
  "datacenter": "mrx",
  "encrypt": "UAvkAzdjGfQ7J2NlgkrJMA==",
  "enable_syslog": false,
  "enable_script_checks": true,
  "pid_file": "/var/consul/consul.pid",
  "acl_token": "548bb56f-33c9-622a-4351-1a04851ebb1a"
}

Service Health Check

pong01> vim /etc/consul.d/pong.json

{
  "service": {
    "name": "pong",
    "status": "critical",
    "check": {
      "service_id": "pong",
      "interval": "10s",
      "script": "/usr/bin/netstat -an | grep 8300 | grep LISTEN"
    },
    "port": 8300,
    "tags": ["pong"]
  }
}

This check verifies that the Pong app is listening on port 8300
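
You can run the check command by hand to see what Consul will see. For script checks, Consul maps exit code 0 to passing, 1 to warning, and anything else to critical; grep exits 1 when there is no match:

/usr/bin/netstat -an | grep 8300 | grep LISTEN
echo $?
# 0 while Pong is listening, 1 (warning) once it is stopped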

To start Consul, add the same startup script (systemd unit) as for the Consul server

start Consul

Client should register itself and its Pong service with the cluster
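
A quick way to verify the registration is to query the health API on one of the Consul servers (the anonymous token was given read-only access earlier):

curl -s "http://10.185.20.180:8500/v1/health/service/pong?passing"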

Try stopping Pong and watch for Consul to show the service as Orange (failing)

Fabio config

To route users to any of the 3 running Pong instances, you will need Fabio to read the health status of each instance (from Consul)

on each Consul server, create a Fabio user + group

useradd -M -d /opt/fabio -s /sbin/nologin fabio
mkdir -p /opt/fabio/bin

get the Binary
wget https://github.com/fabiolb/fabio/releases/download/v1.5.9/fabio-1.5.9-go1.10.2-linux_amd64 -O /opt/fabio/bin/fabio && chmod +x /opt/fabio/bin/fabio
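
A quick check that the binary downloaded correctly and runs (it should print the version, 1.5.9 here):

/opt/fabio/bin/fabio -v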

On each Consul server, add a Fabio properties file and update ui.addr and registry.consul.addr. By default, Fabio's proxy listens on port 9999.

Fabio Properties

vim /opt/fabio/fabio.properties

# These two lines are an example of running fabio with HTTPS certificates
#proxy.cs = cs=lb;type=file;cert=/opt/fabio/certs.d/mydomain_com.ca-bundle.crt;key=/opt/fabio/certs.d/mydomain_com.key
#proxy.addr = :443;cs=lb;tlsmin=tls11;tlsmax=tls12;tlsciphers="0xc02f,0x9f,0xc030,0xc028,0xc014,0x6b,0x39,0x009d,0x0035",
#             :80
proxy.addr = :9999
proxy.header.tls = Strict-Transport-Security
proxy.header.tls.value = "max-age=63072000; includeSubDomains"
ui.addr = 10.185.20.180:9998
ui.access = ro
runtime.gogc = 800
log.access.target = stdout
log.access.format = combined
log.access.level = INFO
registry.consul.addr = 10.185.20.180:8500
proxy.maxconn = 20000
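
Before wiring Fabio into systemd, you can run it once in the foreground to confirm the properties file parses and that it can reach Consul (Ctrl-C to stop):

cd /opt/fabio
./bin/fabio -cfg fabio.properties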

Create a Fabio startup script

cat <<EOF > /etc/systemd/system/fabio.service
[Unit]
Description=Fabio Proxy
After=syslog.target
After=network.target

[Service]
LimitMEMLOCK=infinity
LimitNOFILE=65535

Type=simple
WorkingDirectory=/opt/fabio
Restart=always
ExecStart=/opt/fabio/bin/fabio -cfg fabio.properties

# Log to syslog with identifier for syslog to process
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=fabio

# No need that fabio messes with /dev
PrivateDevices=yes

# Dedicated /tmp
PrivateTmp=yes

# Make /usr, /boot, /etc read only
ProtectSystem=full

# /home is not accessible at all
ProtectHome=yes

# You will have to run "setcap 'cap_net_bind_service=+ep' /opt/fabio/bin/fabio"
# to be able to bind ports under 1024. This directive allows it to happen:
AmbientCapabilities=CAP_NET_BIND_SERVICE

# Only ipv4, ipv6, unix socket and netlink networking is possible
# Netlink is necessary so that fabio can list available IPs on startup
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX AF_NETLINK

# Unprivileged user
User=fabio
Group=fabio

[Install]
WantedBy=multi-user.target
EOF

set permissions
chown -R fabio:fabio /opt/fabio

systemctl daemon-reload
systemctl enable fabio.service
systemctl start fabio.service

check Fabio service

journalctl -u fabio --no-pager -n100

You should now see the Fabio service listed in Consul

Fabio's UI should be listening on port 9998; open a browser to <IP of Consul server>:9998

But there should be no routes yet

Fabio Routes

We will create 1 route for the Apache that's running on each Pong server.

Fabio will take all requests coming to the Fabio hostname on port 9999 and route them to <IP of Pong server>:80

We now have a total of 2 health checks: 1. Pong, 2. Apache

Consul monitors both, but Fabio will create a route only for Apache.

Apache service check

let's add the Apache healthcheck. We want to route users to Apache (which acts as a reverse proxy, using certs to proxy users to the Pong web UI via HTTPS). Note: Fabio also supports TLS authentication, but that is outside the scope of this article

add a new Apache health check. The tag uses the “urlprefix-” prefix, followed by a slash, to tell Fabio to create a route; Fabio will then route any requests coming to the Fabio host (port 9999) to this service. (A host-specific variation of the tag is shown after the service definition.)

{
  "service": {
    "name": "apache-svc",
    "status": "critical",
    "check": {
      "service_id": "apache-svc",
      "interval": "10s",
      "script": "/usr/bin/systemctl status httpd.service"
    },
    "port": 80,
    "tags": ["urlprefix-/"]
  }
}
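
As a side note, the urlprefix- tag can also include a hostname if you only want Fabio to match requests for a particular Host header instead of every request; for example, using the hypothetical name from earlier:

"tags": ["urlprefix-pong.company.com/"]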

restart the Consul client on the Pong node (make sure the health checks pass, otherwise Fabio won't show the route)

you should now see Fabio display a proper route

Check the available routes using the API

curl -s http://10.185.20.180:9998/api/routes

[{"service":"apache-svc","host":"","path":"/","src":"/","dst":"http://10.185.20.173:80/","opts":"","weight":1,"cmd":"route add","rate1":0,"pct99":0}]

Test Routes

on the Consul/Fabio host, tail Fabio output,

journalctl -u fabio -f

Turn off either Apache or Pong service, watch the routing table get updated automatically

Fabio adds 2 routes, Pong and Apache. I turned off the Apache service, and Fabio instantly removed the Apache route (shown with a minus sign in the log)

In the browser, try going to <IP or Hostname of Consul/Fabio host>:9999

It should proxy you to <Pong host>:80, and from there Apache takes over and reverse-proxies you to the Pong service running on its own port
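
You can also test this from the command line; the IP below is one of the Consul/Fabio hosts from this example, and the response headers should come back from Apache on the Pong node:

curl -sI http://10.185.20.180:9999/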
