A Comprehensive, Detailed Guide to the Basics (12)

1) Install LVS on the 172.16.60.208 server (the installation procedure is the same as above)
[root@lvs-208 ~]# yum install -y libnl* popt*
[root@lvs-208 ~]# cd /usr/local/src/
[root@lvs-208 src]# unlink /usr/src/linux
[root@lvs-208 src]# ln -s /usr/src/kernels/2.6.32-431.5.1.el6.x86_64/ /usr/src/linux
[root@lvs-208 src]# wget
[root@lvs-208 src]# tar -zvxf ipvsadm-1.26.tar.gz
[root@lvs-208 src]# cd ipvsadm-1.26
[root@lvs-208 ipvsadm-1.26]# make && make install
[root@lvs-208 ipvsadm-1.26]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn
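 
ipvsadm normally loads the ip_vs kernel module on first use; if the command above complains that IPVS is unavailable, the module can be checked and loaded by hand. This is a small sanity check added here, not part of the original steps:

# Load the IPVS kernel module (if not already loaded) and confirm it registered
modprobe ip_vs
lsmod | grep ip_vs
cat /proc/net/ip_vs        # shows the same "IP Virtual Server" banner as ipvsadm -Ln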
 
2) Configure the VIP on the two backend web nodes (realservers); the steps are identical on both realserver nodes
[root@rs-205 ~]# vim /etc/init.d/realserver
#!/bin/sh
VIP=172.16.60.119
. /etc/rc.d/init.d/functions
       
case "$1" in
start)
    # Suppress ARP replies for the VIP and bind it to the local loopback interface
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/sysctl -p >/dev/null 2>&1
    /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    echo "LVS-DR real server started successfully."
    ;;
stop)
    # Remove the VIP and restore the default ARP behaviour
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP >/dev/null 2>&1
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "LVS-DR real server stopped."
    ;;
status)
    isLoOn=`/sbin/ifconfig lo:0 | grep "$VIP"`
    isRoOn=`/bin/netstat -rn | grep "$VIP"`
    if [ "$isLoOn" == "" -a "$isRoOn" == "" ]; then
        echo "LVS-DR real server is not running."
        exit 3
    else
        echo "LVS-DR real server is running."
    fi
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0
 
 
Run the script:
[root@rs-205 ~]# chmod 755 /etc/init.d/realserver
[root@rs-205 ~]# /etc/init.d/realserver start
LVS-DR real server started successfully.
 
[root@rs-205 ~]# ifconfig
......
lo:0      Link encap:Local Loopback 
          inet addr:172.16.60.119  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
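 
To double-check the ARP suppression settings the script wrote on the realserver, they can be read back directly; a small verification sketch, not part of the original steps:

# Verify the ARP kernel parameters set by the realserver script
sysctl net.ipv4.conf.lo.arp_ignore net.ipv4.conf.lo.arp_announce
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
# expected per the script: arp_ignore = 1, arp_announce = 2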
 
 
Port 80 on the two backend web nodes is served by nginx; the nginx installation and configuration are omitted here.
[root@rs-205 ~]# ps -ef|grep nginx
root    24154    1  0 Dec25 ?        00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx    24155 24154  0 Dec25 ?        00:00:02 nginx: worker process                 
root    24556 23313  0 01:14 pts/1    00:00:00 grep nginx
[root@rs-205 ~]# lsof -i:80
COMMAND  PID  USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
nginx  24154  root    7u  IPv4  85119      0t0  TCP *:http (LISTEN)
nginx  24155 nginx    7u  IPv4  85119      0t0  TCP *:http (LISTEN)
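 
As another quick check (added here, not in the original text), confirm that nginx actually answers on each realserver; ideally each node's index page identifies the node, so it is obvious later which backend served a request through the VIP:

# On each realserver: confirm nginx responds locally on port 80
curl -s http://127.0.0.1/ | head -n 5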
 
 
3) Manage LVS on the 172.16.60.208 server
Add the LVS cluster service with VIP 172.16.60.119,
then add the two backend realservers, specifying DR as the forwarding mode.
 
# Flush the iptables rules/counters and clear any existing IPVS configuration
[root@lvs-208 ~]# /sbin/iptables -F
[root@lvs-208 ~]# /sbin/iptables -Z
[root@lvs-208 ~]# /sbin/ipvsadm -C
   
# Set the IPVS connection timeouts (tcp tcpfin udp) and bind the VIP to eth0:0 on the director
[root@lvs-208 ~]# /sbin/ipvsadm --set 30 5 60
[root@lvs-208 ~]# /sbin/ifconfig eth0:0 172.16.60.119 broadcast 172.16.60.119 netmask 255.255.255.255 up
[root@lvs-208 ~]# /sbin/route add -host 172.16.60.119 dev eth0:0
 
# Create the virtual service (wlc scheduler, 600s persistence) and add both realservers in DR mode (-g)
[root@lvs-208 ~]# /sbin/ipvsadm -A -t 172.16.60.119:80 -s wlc -p 600
[root@lvs-208 ~]# /sbin/ipvsadm -a -t 172.16.60.119:80 -r 172.16.60.205:80 -g
[root@lvs-208 ~]# /sbin/ipvsadm -a -t 172.16.60.119:80 -r 172.16.60.206:80 -g
 
# Record the subsys lock file, then announce the VIP to the gateway (172.16.60.1) so its ARP cache points at the director
[root@lvs-208 ~]# touch /var/lock/subsys/ipvsadm >/dev/null 2>&1
[root@lvs-208 ~]# /sbin/arping -I eth0 -c 5 -s 172.16.60.119 172.16.60.1 >/dev/null 2>&1
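 
The three numbers given to --set are the IPVS tcp, tcpfin and udp session timeouts in seconds. As an extra verification step (not in the original article), they can be read back on the director:

# Display the IPVS connection timeouts configured by "ipvsadm --set 30 5 60"
ipvsadm -L --timeout
# should print something like: Timeout (tcp tcpfin udp): 30 5 60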
 
Check the VIP:
[root@lvs-208 ~]# ifconfig
......
 
eth0:0    Link encap:Ethernet  HWaddr 00:50:56:AC:5B:56 
          inet addr:172.16.60.119  Bcast:172.16.60.119  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 
Check the LVS forwarding table:
[root@lvs-208 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn
TCP  172.16.60.119:80 wlc persistent 600
  -> 172.16.60.205:80            Route  1      0          0       
  -> 172.16.60.206:80            Route  1      0          10 
 
Visiting the VIP now load-balances requests to port 80 on the two realservers.
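 
A simple way to see this from a client machine is to request the VIP a few times; this is a test sketch that assumes each realserver serves a page identifying itself. Note that because of -p 600, requests from the same client IP stick to one realserver for 600 seconds:

# Run from a client outside the LVS cluster
for i in 1 2 3 4 5; do
    curl -s http://172.16.60.119/
done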
 
Because persistence is configured, client requests arriving within 600 seconds are forwarded to the same realserver node.
If the current requests are being forwarded to 172.16.60.206 and that node's port 80 is shut down, access fails,
so the failed node has to be kicked out of the LVS cluster by hand, as follows:
 
[root@lvs-208 ~]# ipvsadm -d -t 172.16.60.119:80 -r 172.16.60.206
 
[root@lvs-208 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn
TCP  172.16.60.119:80 wlc persistent 600
  -> 172.16.60.205:80            Route  1      0          0   
 
After that, visiting the VIP returns the page served on port 80 of the 172.16.60.205 node.
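 
Because there is still no health-check mechanism at this point, the repaired node also has to be put back by hand; for example, once 172.16.60.206 is healthy again:

# Re-add the recovered realserver in DR mode with weight 1
ipvsadm -a -t 172.16.60.119:80 -r 172.16.60.206:80 -g -w 1
ipvsadm -Ln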
 
The LVS setup above has no health-check mechanism for the backend realserver nodes. To health-check the realservers,
combine LVS with the ldirectord software; its configuration provides parameters that:
automatically remove a realserver node from the LVS cluster when it fails;
automatically re-add the node to the LVS cluster once it recovers.
 
======================================================
For the installation and configuration of ldirectord, see: https://www.linuxidc.com/Linux/2019-01/156139.htm
 
[root@lvs-208 src]# pwd
/usr/local/src
[root@lvs-208 src]# ll ldirectord-3.9.5-3.1.x86_64.rpm
-rw-rw-r-- 1 root root 90140 Dec 24 15:54 ldirectord-3.9.5-3.1.x86_64.rpm
 
[root@lvs-208 src]# yum install -y ldirectord-3.9.5-3.1.x86_64.rpm
 
[root@lvs-208 src]# cat /etc/init.d/ldirectord |grep "config file"
#              Using the config file /etc/ha.d/ldirectord.cf
#      It uses the config file /etc/ha.d/ldirectord.cf.
 
As the grep above shows, ldirectord's configuration file is /etc/ha.d/ldirectord.cf.
 
[root@lvs-208 src]# cd /usr/share/doc/ldirectord-3.9.5
[root@lvs-208 ldirectord-3.9.5]# ll ldirectord.cf
-rw-r--r-- 1 root root 8301 Feb  7  2013 ldirectord.cf
[root@lvs-208 ldirectord-3.9.5]# cp ldirectord.cf /etc/ha.d/
[root@lvs-208 ldirectord-3.9.5]# cd /etc/ha.d/
[root@lvs-208 ha.d]# ll
total 20
-rw-r--r-- 1 root root 8301 Dec 26 01:44 ldirectord.cf
drwxr-xr-x 2 root root 4096 Dec 26 01:40 resource.d
-rw-r--r-- 1 root root 2082 Mar 24  2017 shellfuncs
 
Configure ldirectord.cf to enable health checking of the realserver nodes (modify it based on the sample configuration shipped in the file):
[root@lvs-208 ha.d]# cp ldirectord.cf ldirectord.cf.bak
[root@lvs-208 ha.d]# vim ldirectord.cf
checktimeout=3
checkinterval=1
autoreload=yes
logfile="/var/log/ldirectord.log"
quiescent=no                                # on failure, remove the realserver from the IPVS table instead of setting its weight to 0; this is what makes the health check take effect
 
virtual=172.16.60.119:80
        real=172.16.60.205:80 gate
        real=172.16.60.206:80 gate
        fallback=127.0.0.1:80 gate    # when all realservers are down, forward requests to port 80 on the LVS director itself (see the note after this configuration)
        service=http
        scheduler=rr
        persistent=600
        #netmask=255.255.255.255
        protocol=tcp
        checktype=negotiate
        checkport=80
        #request="index.html"
        #receive="Test Page"
        #virtualhost=www.x.y.z
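 
Two notes on this configuration. With service=http and checktype=negotiate, ldirectord performs an HTTP-level check; uncommenting the request/receive lines makes it fetch a specific page and match its content. And fallback=127.0.0.1:80 only helps if something actually listens on port 80 of the director itself. A minimal sketch of both ideas, where the check page name and the nginx docroot are assumptions, not from the original article:

# On each realserver: create a tiny page for ldirectord to fetch (docroot path is an assumption)
echo "OK" > /usr/share/nginx/html/check.html
# and in /etc/ha.d/ldirectord.cf enable:
#   request="check.html"
#   receive="OK"

# On the director: serve a simple "sorry" page locally so the fallback entry has something to hit
yum install -y nginx
echo "Service temporarily unavailable" > /usr/share/nginx/html/index.html
/etc/init.d/nginx start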
 
 
Start the ldirectord service:
[root@lvs-208 ha.d]# /etc/init.d/ldirectord start
 
[root@lvs-208 ha.d]# ps -ef|grep ldirectord
root      4399    1  0 01:48 ?        00:00:00 /usr/bin/perl -w /usr/sbin/ldirectord start
root      4428  3750  0 01:50 pts/0    00:00:00 grep ldirectord
 
With this ldirectord configuration, the backend realservers now have health checks.
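 
Optionally (not covered in the original article), ldirectord can be enabled at boot and its activity followed in the log file configured above:

# Enable ldirectord at boot (SysV init on CentOS 6) and watch its health-check activity
chkconfig ldirectord on
chkconfig --list ldirectord
tail -f /var/log/ldirectord.log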
 
[root@lvs-208 ha.d]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn
TCP  172.16.60.119:80 rr persistent 600
  -> 172.16.60.205:80            Route  1      0          0       
  -> 172.16.60.206:80            Route  1      0          0 
 
When either 172.16.60.205 or 172.16.60.206 fails, the failed node is automatically removed from the LVS cluster and all requests are forwarded to the remaining node;
once the failed node recovers, it is automatically re-added to the LVS cluster. The whole process is transparent to the clients in front.
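 
This behaviour is easy to exercise by hand; a test sketch, assuming nginx on the realservers is managed by its init script:

# On 172.16.60.205: simulate a failure, then recover
/etc/init.d/nginx stop       # within roughly checkinterval + checktimeout seconds the node disappears from "ipvsadm -Ln"
/etc/init.d/nginx start      # shortly afterwards the node is added back automatically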
 
For example, if port 80 on the 172.16.60.205 node goes down, the LVS forwarding table looks like this:
[root@lvs-208 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn
TCP  172.16.60.119:80 rr persistent 600
  -> 172.16.60.206:80            Route  1      0          0
 
Once port 80 on the 172.16.60.205 node comes back, the LVS forwarding table looks like this:
[root@lvs-208 ha.d]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn
TCP  172.16.60.119:80 rr persistent 600
  -> 172.16.60.205:80            Route  1      0          0       
  -> 172.16.60.206:80            Route  1      0          0 
 
This achieves high availability at the realserver layer!
 
However, the LVS layer itself is still a single point of failure. To make the LVS layer highly available as well, keepalived or heartbeat is needed.
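 
For orientation only, here is a rough sketch of the keepalived route; apart from the addresses taken from this article, everything in it is an assumption. keepalived carries the VIP via VRRP and has built-in realserver health checks, so with a pair of directors (one MASTER, one BACKUP with a lower priority) it can take over the roles of both the manual VIP setup and ldirectord:

# Sketch: write a minimal keepalived.conf on the master director
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        172.16.60.119
    }
}
virtual_server 172.16.60.119 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    persistence_timeout 600
    protocol TCP
    real_server 172.16.60.205 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 172.16.60.206 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
EOF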
