MinIO Cluster With HAProxy Installation & Configuration Part02

In the previous article, we installed and configured the MinIO cluster. Now we can install HAProxy and Keepalived to create a highly available gateway.

Keepalived does the following:

Provides a Virtual IP (VIP) that can float between multiple servers

Monitors the health of services (like HAProxy) using custom scripts

Automatically fails over to a backup node if the primary goes down

Works well with HAProxy to keep load balancing running without manual intervention

 

The following installation & configuration should be done on both HAProxy nodes (192.168.1.131 and 192.168.1.132).

apt update
apt install keepalived
apt install haproxy
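
You can confirm both packages installed correctly by printing their versions:

haproxy -v
keepalived --version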

 

Create a service user named "keepalived_script". We will use this user for health checks and running the script; when this user exists, Keepalived runs scripts as this account by default.

useradd -r -s /bin/false keepalived_script

 

To manage HAProxy's state, we need to create the following bash script on both servers (the content of the file is the same).

Keepalived calls this script whenever the node changes to the MASTER, BACKUP or FAULT state, and the script starts or stops the HAProxy service accordingly. Our HAProxy will be running in Active/Standby mode.

nano /etc/keepalived/haproxy.sh

 

#!/bin/bash
# Keepalived notify script: starts HAProxy when this node becomes MASTER,
# stops it on BACKUP or FAULT.
TYPE=$1   # "INSTANCE" or "GROUP"
NAME=$2   # name of the VRRP instance (VI_1)
STATE=$3  # target state: MASTER, BACKUP or FAULT

case $STATE in
        "MASTER") systemctl start haproxy
                  exit 0
                  ;;
        "BACKUP") systemctl stop haproxy
                  exit 0
                  ;;
        "FAULT")  systemctl stop haproxy
                  exit 0
                  ;;
        *)        echo "unknown state"
                  exit 1
                  ;;
esac
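Make the script executable on both nodes, otherwise Keepalived cannot run it:

chmod +x /etc/keepalived/haproxy.sh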

 

Keepalived Configuration:

nano /etc/keepalived/keepalived.conf

 

Node1: (make sure you change the gateway IP address, Ethernet interface name (mine was ens33), virtual IP, and the source and peer IP addresses)

global_defs {
    enable_script_security
    max_auto_priority
}

vrrp_script check_gw {
    script "/usr/bin/ping -c1 192.168.1.1"
    interval 5
}

vrrp_instance VI_1 {
    interface ens33
    state MASTER
    virtual_router_id 28
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 984322
    }
    virtual_ipaddress {
        192.168.1.130/24 dev ens33
    }
    unicast_src_ip 192.168.1.131
    unicast_peer {
        192.168.1.132
    }
    track_script {
        check_gw
    }
    notify "/etc/keepalived/haproxy.sh" root
}

 

vrrp_script section: pings the gateway every 5 seconds. If the ping fails, the node transitions to the FAULT state.

VI_1: is the instance name; it should be the same on all Keepalived nodes.

interface ens33: is the interface the virtual IP will be attached to. Use the "ip a" command to find your own interface name instead of ens33.

virtual_router_id: Unique ID for the VRRP group. All nodes that will share the virtual IP should use the same ID.

priority: The node with the highest value becomes the MASTER.

advert_int: The node sends its own state to the other nodes every 1 second.

auth_pass: It is used to verify VRRP packets between Keepalived nodes. It should be the same on all nodes.

unicast_src_ip: the node's own IP address

unicast_peer: the remote peer node's IP address

The last line (notify) runs the "haproxy.sh" script whenever the state changes: if the node becomes MASTER, the HAProxy service is started; if the state becomes BACKUP or FAULT, the HAProxy service is stopped.
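
For reference, Keepalived calls the notify script with three arguments (type, instance name, new state), which is exactly what haproxy.sh reads as $1, $2 and $3. On becoming MASTER it effectively runs:

/etc/keepalived/haproxy.sh INSTANCE VI_1 MASTER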

Node2: (note the BACKUP state and the lower priority, so Node1 wins the master election)

global_defs {
    enable_script_security
    max_auto_priority
}

vrrp_script check_gw {
    script "/usr/bin/ping -c1 192.168.1.1"
    interval 5
}

vrrp_instance VI_1 {
    interface ens33
    state BACKUP
    virtual_router_id 28
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 984322
    }
    virtual_ipaddress {
        192.168.1.130/24 dev ens33
    }
    unicast_src_ip 192.168.1.132
    unicast_peer {
        192.168.1.131
    }
    track_script {
        check_gw
    }
    notify "/etc/keepalived/haproxy.sh" root
}

 

HAProxy Configuration:

The HAProxy config file is the same on both nodes. Port 7000 is used for the HAProxy status page, 9000 for MinIO API traffic, and 9001 for the web console.

nano /etc/haproxy/haproxy.cfg

 

Paste the following config at the end of the file, adjusting the IP addresses according to your environment.

#creates a web panel for HAProxy status and binds it to port 7000
listen haproxy-stats
    mode http
    bind 192.168.1.130:7000
    stats enable
    stats uri /

# DEV MinIO API Load Balancer
#Requests are distributed to the backends using the roundrobin algorithm.
#HAProxy listens on port 9000 and forwards to port 9000 on the MinIO backend nodes.
listen minio-api
    mode http
    balance roundrobin
    bind *:9000
#if "200 OK" received from backend, HAProxy determines that the backend is healthy
    option httpchk GET /minio/health/live HTTP/1.1
    http-check send hdr Host myminio
    http-check expect status 200
    default-server inter 3s fall 2 rise 2 on-marked-down shutdown-sessions
    server minio-01 192.168.1.133:9000 check maxconn 300
    server minio-02 192.168.1.134:9000 check maxconn 300
    server minio-03 192.168.1.135:9000 check maxconn 300
    server minio-04 192.168.1.136:9000 check maxconn 300

# MinIO Console Load Balancer
#This is for load balancing the web console. It binds to port 9001 (reachable via the virtual IP).
listen minio-console
    mode http
    balance roundrobin
    bind *:9001
    option httpchk GET /minio/health/ready
    http-check expect status 200
    default-server inter 3s fall 2 rise 2 on-marked-down shutdown-sessions
    server minio-01 192.168.1.133:9001 check maxconn 300
    server minio-02 192.168.1.134:9001 check maxconn 300
    server minio-03 192.168.1.135:9001 check maxconn 300
    server minio-04 192.168.1.136:9001 check maxconn 300
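
You can test the health endpoint that HAProxy probes directly against any backend node; a healthy MinIO node answers with 200 OK:

curl -i http://192.168.1.133:9000/minio/health/live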

 

In summary, there are 3 sections in this configuration:

haproxy-stats  > port 7000 > HAProxy control panel
minio-api      > port 9000 > MinIO API load balancing
minio-console  > port 9001 > MinIO web interface load balancing

 

 

Check that the HAProxy configuration file is valid:

haproxy -c -f /etc/haproxy/haproxy.cfg

 

Start and enable the services:

systemctl start keepalived
systemctl start haproxy
systemctl enable keepalived
systemctl enable haproxy
systemctl reload haproxy
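
On the MASTER node, the virtual IP should now be attached to the interface (adjust the interface name and VIP to your environment):

ip a show ens33 | grep 192.168.1.130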

 

Check the HAProxy logs for errors:

journalctl -u haproxy
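
Keepalived logs its state transitions (MASTER/BACKUP/FAULT) to the journal as well:

journalctl -u keepalived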

 

Note that we do not use the "systemctl restart haproxy" command to restart the HAProxy service. We should use "systemctl reload haproxy" instead, because a reload applies the new configuration without dropping existing connections.
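
A safe habit is to validate the configuration and reload in one step, so a broken config never takes the load balancer down:

haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl reload haproxy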

 

Browse virtual_IP:7000 to see the HAProxy status page. This page shows the active backends, the number of connections, and the health status.

 

If you browse virtual_IP:9001, you can see the web console.

 

 

API TEST:

First, create a bucket named bucket01 via "http://virtual_ip:9001/buckets" and upload some files to it.

 

 

Then create an Access Key.
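
If you prefer the command line to the web console, the same steps can be done with the MinIO client (mc); this is a sketch assuming mc is installed, with placeholder keys:

# register the cluster through the virtual IP (placeholder keys)
mc alias set myminio http://192.168.1.130:9000 YOUR_ACCESS_KEY YOUR_SECRET_KEY
# list the bucket contents to verify API access through HAProxy
mc ls myminio/bucket01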

 

 

Now open up Postman. Workspace > New > HTTP > GET > enter the API address/bucketname/filename.

Choose Authorization as AWS Signature and enter your Access Key & Secret Key.

The Service Name must be s3. Then click Send.
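
The same request can be made from the shell with curl (7.75 or newer), which supports AWS Signature V4 signing; the keys and file name below are placeholders:

curl --aws-sigv4 "aws:amz:us-east-1:s3" \
     --user "YOUR_ACCESS_KEY:YOUR_SECRET_KEY" \
     "http://192.168.1.130:9000/bucket01/file.txt"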

 

I think we can wrap up our subject now. You can go further and do some failover tests by shutting down your nodes, as sketched below. Good luck!
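
For example, here is a minimal failover test, assuming node1 currently holds the virtual IP:

# on node1: simulate a failure
systemctl stop keepalived

# on node2: the VIP should appear within a few seconds
ip a show ens33 | grep 192.168.1.130

# and the notify script should have started HAProxy
systemctl status haproxy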