Keepalived + LVS for Highly Available Load Balancing
- Preface
- 1. Keepalived overview
- 2. Design principles
- 3. Case study: dual-node hot standby with keepalived
- 4. keepalived + LVS
Preface
1. Keepalived overview
Overview: keepalived started out as a companion tool for LVS, handling failover of the LVS load scheduler and health checking of the web nodes; it has since been adopted in many other scenarios that need fault tolerance. keepalived itself is built on VRRP (Virtual Router Redundancy Protocol), an open IETF standard (RFC 5798); Cisco's comparable protocol, HSRP, is the proprietary one.
2. Design principles
Application scenarios:
Modules:
1. core module: the heart of keepalived; starts and maintains the main process, and loads and parses the global configuration file;
2. check module: health checking of the nodes in the real-server pool;
3. VRRP module: heartbeat between master and backup;
How hot standby works: several hosts are grouped in software into a hot-standby group that serves clients through a shared virtual IP (VIP) address. At any moment only one host in the group is active; the others stand by as redundant spares. When the active host fails, one of the spares automatically takes over the VIP and keeps the service running, so the architecture stays available.
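The "only one host holds the VIP" behavior can be observed directly: the VIP appears in `ip -4 addr show` only on the active node. A minimal sketch (not part of keepalived; `has_vip` is a hypothetical helper name):

```shell
# Decide, from `ip addr` output fed on stdin, whether this node owns the VIP.
has_vip() {
    # $1 = the VIP to look for
    if grep -q "inet $1" ; then
        echo "active (holds VIP)"
    else
        echo "standby (no VIP)"
    fi
}
# usage on a node: ip -4 addr show | has_vip 192.168.100.200
```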
3. Case study: dual-node hot standby with keepalived
Topology:
Environment:
OS | IP address | Hostname | Software |
---|---|---|---|
CentOS 7.4 1708 64-bit | 192.168.100.101 | node1.linuxyu.cn | keepalived+httpd |
CentOS 7.4 1708 64-bit | 192.168.100.102 | node2.linuxyu.cn | keepalived+httpd |
Steps:
- Install the httpd service on node1;
- Install the httpd service on node2;
- Install keepalived on both node machines (the steps are identical, so only one is shown);
- Configure node1 as the master node;
- Configure node2 as the backup node;
- Test the hot standby from a client;
- Extension: the steps above give keepalived a layer-4 health check of the real-server pool (transport layer, based on protocol or port); below we show how keepalived performs a layer-7 (application-layer) health check.
1. Install the httpd service on node1
Install and start httpd:
[root@node1 ~]# yum -y install httpd
[root@node1 ~]# echo "<h1>192.168.100.101</h1>" > /var/www/html/index.html
[root@node1 ~]# systemctl start httpd
[root@node1 ~]# systemctl enable httpd
[root@node1 ~]# netstat -anput |grep 80
tcp6 0 0 :::80 :::* LISTEN 2125/httpd
2. Install the httpd service on node2
Install and start httpd:
[root@node2 ~]# yum -y install httpd
[root@node2 ~]# echo "<h1>192.168.100.102</h1>" > /var/www/html/index.html
[root@node2 ~]# systemctl start httpd
[root@node2 ~]# systemctl enable httpd
[root@node2 ~]# netstat -anput |grep 80
tcp6 0 0 :::80 :::* LISTEN 1119/httpd
3. Install keepalived on both nodes (identical on both; only one shown)
Install keepalived and back up its main configuration file:
[root@node1 ~]# yum -y install keepalived
[root@node1 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
4. Configure node1 as the master node
Edit the keepalived main configuration file:
[root@node1 ~]# echo '
global_defs {
   router_id HA_TEST_R1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.100.200
    }
}
' > /etc/keepalived/keepalived.conf
Explanation:
router_id HA_TEST_R1 ## name of this server
vrrp_instance VI_1 ## defines a VRRP hot-standby instance
state MASTER ## MASTER marks the primary server
interface eth0 ## physical interface carrying the VIP (use your actual NIC name, e.g. ens32 on CentOS 7)
virtual_router_id 1 ## virtual router ID; must match on master and backup
priority 100 ## priority; the higher value wins the election
advert_int 1 ## advertisement interval in seconds (heartbeat rate)
authentication ## authentication block: type PASS, password 123456
virtual_ipaddress ## the floating address (VIP): 192.168.100.200
Start keepalived and check whether the VIP is on this server:
[root@node1 ~]# systemctl start keepalived
[root@node1 ~]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@node1 ~]# ip a |grep 192.168.100.200
    inet 192.168.100.200/32 scope global ens32
5. Configure node2 as the backup node
Edit the keepalived main configuration file:
[root@node2 ~]# echo '
global_defs {
   router_id HA_TEST_R2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 1
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.100.200
    }
}
' > /etc/keepalived/keepalived.conf
Explanation (only the lines that differ from node1):
router_id HA_TEST_R2 ## name of this server
state BACKUP ## marks the backup server
priority 90 ## lower than the master's priority
Start keepalived and enable it at boot:
[root@node2 ~]# systemctl start keepalived
[root@node2 ~]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
6. Test the hot standby from a client
During the test, repeatedly disconnect and reconnect the master's NIC and observe.
6.1 Ping the floating IP from the client
ping 192.168.100.200 -t
If any interruption recovers within a very short time, the hot standby is working.
6.2 Verify over HTTP from the client
http://192.168.100.200
If, after disconnecting the master, a refresh immediately shows the 192.168.100.102 page, and after restoring the master a refresh shows 192.168.100.101 again, the hot standby is working.
6.3 Watch the logs on master and backup
tail -f /var/log/messages
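The switchover in step 6.2 can also be checked from a script: each node's test page embeds its own IP, so extracting the IP from the response tells you which node served it. A sketch (`backend_of` is a hypothetical helper; the usage line assumes the VIP from this case study):

```shell
# Extract the serving node's IP from a test page such as the ones created
# in steps 1-2, e.g. <h1>192.168.100.101</h1>.
backend_of() {
    grep -o '192\.168\.100\.[0-9]*' | head -n 1
}
# usage against the VIP: curl -s http://192.168.100.200/ | backend_of
```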
7. Extension: the steps above give keepalived a layer-4 health check of the real-server pool (transport layer, based on protocol or port); below we show how keepalived performs a layer-7 (application-layer) health check
Example: for reference only (untested; use with care)
[root@node1 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
   router_id HA_TEST_R1
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.100.200
    }
    track_script {
        check_nginx
    }
}
virtual_server 192.168.100.200 80 {
    protocol TCP
    real_server 192.168.100.101 80 {
        weight 3
    }
}
Explanation:
script "/etc/keepalived/check_nginx.sh" ## the check script (path must match the file created below)
interval 2 ## how often to run it (seconds)
track_script { check_nginx } ## reference the script inside the VRRP instance
virtual_server 192.168.100.200 80 ## VIP configuration
protocol TCP ## protocol
real_server 192.168.100.101 80 ## this host's address
weight 3 ## server weight
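For a true layer-7 check of a remote real server's content, keepalived also offers the HTTP_GET checker inside a real_server block. A sketch only (untested, values illustrative; path and port follow the example above):

```
real_server 192.168.100.101 80 {
    weight 3
    HTTP_GET {
        url {
            path /index.html
            status_code 200      ## expect HTTP 200 from the page itself
        }
        connect_timeout 3
        nb_get_retry 3
        delay_before_retry 3
    }
}
```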
[root@node1 ~]# vim /etc/keepalived/check_nginx.sh
#!/bin/bash
Count1=`netstat -anput |grep -v grep |grep nginx |wc -l`
if [ $Count1 -eq 0 ]; then
    /usr/local/nginx/sbin/nginx
    sleep 2
    Count2=`netstat -anput |grep -v grep |grep nginx |wc -l`
    if [ $Count2 -eq 0 ]; then
        systemctl stop keepalived   ## restart failed: stop keepalived so the VIP fails over to the backup
    else
        exit 0
    fi
fi
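The restart-or-failover decision in check_nginx.sh can be isolated into a small function for clarity. A sketch (`decide_action` is a hypothetical name; it takes the nginx process counts before and after the restart attempt):

```shell
decide_action() {
    # $1 = nginx process count before the restart attempt
    # $2 = process count after the restart attempt
    if [ "$1" -eq 0 ]; then
        if [ "$2" -eq 0 ]; then
            echo "stop-keepalived"   # restart failed: release the VIP
        else
            echo "recovered"         # restart brought nginx back
        fi
    else
        echo "healthy"               # nginx was running all along
    fi
}
```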
4. keepalived + LVS
Lab environment:
OS | IP address | Hostname | Software |
---|---|---|---|
CentOS 7.4 1708 64-bit | 192.168.100.101 | kp1.linuxyu.cn | keepalived+ipvsadm |
CentOS 7.4 1708 64-bit | 192.168.100.102 | kp2.linuxyu.cn | keepalived+ipvsadm |
CentOS 7.4 1708 64-bit | 192.168.100.103 | web1.linuxyu.cn | httpd |
CentOS 7.4 1708 64-bit | 192.168.100.104 | web2.linuxyu.cn | httpd |
Lab topology:
Lab procedure:
This builds on the keepalived hot-standby lab above.
1. kp1, the master scheduler
[root@kp1 ~]# systemctl stop httpd
[root@kp1 ~]# systemctl disable httpd
[root@kp1 ~]# yum -y remove httpd
[root@kp1 ~]# netstat -anput | grep httpd
[root@kp1 ~]# yum -y install ipvsadm
[root@kp1 ~]# echo '
global_defs {
   router_id HA_TEST_R1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.100.200
    }
}
virtual_server 192.168.100.200 80 {
    delay_loop 15
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.100.103 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
    real_server 192.168.100.104 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
}
' >/etc/keepalived/keepalived.conf
Explanation of the configuration:
state MASTER ## primary scheduler
interface eth0 ## interface carrying the VIP (use your actual NIC name)
virtual_router_id 1 ## must be identical on master and backup
advert_int 1 ## heartbeat interval (seconds)
authentication ## authentication method (type PASS, password 123456)
virtual_ipaddress ## the floating (virtual) IP
virtual_server 192.168.100.200 80 ## VIP and port
delay_loop 15 ## health-check interval (seconds)
lb_algo rr ## scheduling algorithm
lb_kind DR ## cluster forwarding mode
! persistence_timeout 50 ## connection persistence time (commented out here)
protocol TCP ## protocol used by the service
real_server 192.168.100.103 80 ## web node 1 (web node 2 is configured the same way)
weight 1 ## server weight
TCP_CHECK ## health-check method
connect_port 80 ## target port
connect_timeout 3 ## connect timeout
nb_get_retry 3 ## number of retries
delay_before_retry 4 ## delay between retries
Be careful to keep every { matched with its closing }.
[root@kp1 ~]# cat /etc/keepalived/keepalived.conf
Load the LVS kernel module:
[root@kp1 ~]# modprobe ip_vs
Check that the module is loaded and restart keepalived:
[root@kp1 ~]# lsmod |grep ip_vs
[root@kp1 ~]# systemctl restart keepalived
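The `lb_algo rr` line above selects plain round-robin: requests go to the pool members in turn, ignoring weights. A tiny illustrative sketch of that rotation (not LVS code; `rr_pick` is a hypothetical name):

```shell
# Pick the real server rr would choose for the n-th request (zero-based),
# using the two web nodes from this lab.
rr_pick() {
    req=$1
    set -- 192.168.100.103 192.168.100.104
    shift $(( req % $# ))
    echo "$1"
}
```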
2. kp2, the backup scheduler
[root@kp2 ~]# service httpd stop
[root@kp2 ~]# yum -y remove httpd
[root@kp2 ~]# netstat -anput |grep httpd
[root@kp2 ~]# yum -y install ipvsadm
[root@kp2 ~]# echo '
global_defs {
   router_id HA_TEST_R2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 1
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.100.200
    }
}
virtual_server 192.168.100.200 80 {
    delay_loop 15
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.100.103 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
    real_server 192.168.100.104 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
}
' >/etc/keepalived/keepalived.conf
[root@kp2 ~]# cat /etc/keepalived/keepalived.conf
[root@kp2 ~]# modprobe ip_vs
[root@kp2 ~]# lsmod |grep ip_vs
[root@kp2 ~]# echo "modprobe ip_vs" >>/etc/rc.local
[root@kp2 ~]# systemctl restart keepalived
3. Web server pool configuration
3.1 web1 (configure web2 the same way)
[root@web1 ~]# cd /etc/sysconfig/network-scripts/
[root@web1 ~]# cp ifcfg-lo ifcfg-lo:0
[root@web1 ~]# echo '
DEVICE=lo:0
IPADDR=192.168.100.200
NETMASK=255.255.255.255
ONBOOT=yes
'>/etc/sysconfig/network-scripts/ifcfg-lo:0
[root@web1 network-scripts]# systemctl restart network
[root@web1 network-scripts]# ip a
[root@web1 ~]# echo "route add -host 192.168.100.200 dev lo:0" >>/etc/rc.local
[root@web1 ~]# route add -host 192.168.100.200 dev lo:0
[root@web1 ~]# route -n
[root@web1 ~]# echo '
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
' >> /etc/sysctl.conf
[root@web1 ~]# sysctl -p
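These four sysctl settings stop the web nodes from answering ARP for the VIP bound on lo:0, which DR mode requires: otherwise a real server could answer ARP for the VIP and bypass the scheduler. A quick sketch to confirm they are present in a sysctl-format file (`check_arp` is a hypothetical helper):

```shell
check_arp() {
    # $1 = path to a sysctl-format file such as /etc/sysctl.conf
    grep -q '^net\.ipv4\.conf\.all\.arp_ignore *= *1' "$1" &&
    grep -q '^net\.ipv4\.conf\.all\.arp_announce *= *2' "$1" &&
    grep -q '^net\.ipv4\.conf\.lo\.arp_ignore *= *1' "$1" &&
    grep -q '^net\.ipv4\.conf\.lo\.arp_announce *= *2' "$1" &&
    echo "DR ARP settings present" || echo "DR ARP settings missing"
}
# usage: check_arp /etc/sysctl.conf
```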
3.2 web1
[root@web1 ~]# yum -y install httpd
[root@web1 ~]# echo "<h1>web1</h1>" >/var/www/html/index.html
[root@web1 ~]# systemctl start httpd
[root@web1 ~]# netstat -anput | grep httpd
3.3 web2
[root@web2 ~]# yum -y install httpd
[root@web2 ~]# echo "<h1>web2</h1>" >/var/www/html/index.html
[root@web2 ~]# systemctl start httpd
[root@web2 ~]# netstat -anput | grep httpd
4. Testing the LVS + keepalived high-availability cluster
Test 1: is LVS load balancing working?
From a client browser, access 192.168.100.200 and check whether refreshes alternate between the two pages.
Test 2: node and service health checks
1. Watch the log on the master scheduler:
tail -f /var/log/messages
2. Stop the Apache service on web1 and check whether the master scheduler's service health check fires.
3. Disconnect web1's network and check whether the node health check fires.
Test 3: hot standby
1. Shut down the master LVS scheduler.
2. Check whether the client can still reach the page.
Test 4: after the master recovers, does service switch back?