Building a Highly Available MySQL Cluster with Corosync + DRBD

1. Lab environment:

Node1: 192.168.1.17 (RHEL5.8_32bit, MySQL server)
Node2: 192.168.1.18 (RHEL5.8_32bit, MySQL server)
SteppingStone: 192.168.1.19 (RHEL5.8_32bit)
VIP: 192.168.1.20

2. Preparation
<1> Configure host names
Node names are resolved via /etc/hosts, and each node's name must match the output of the uname -n command.
Node1:
 # hostname node1.linuxidc.com
# vim /etc/sysconfig/network
HOSTNAME=node1.linuxidc.com

Node2:
 # hostname node2.linuxidc.com
# vim /etc/sysconfig/network
HOSTNAME=node2.linuxidc.com
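
Since the node name must match uname -n exactly, it is worth verifying on each node before continuing (output shown for Node1; Node2 should print node2.linuxidc.com):
 # uname -n
node1.linuxidc.com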

<2> Configure key-based ssh mutual trust between the nodes
Node1:
 # ssh-keygen -t rsa
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2

Node2:
 # ssh-keygen -t rsa
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
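
A quick way to confirm the key-based trust works is to run a harmless remote command; it should complete without a password prompt:
 # ssh node2 'date'    # run from Node1; use ssh node1 'date' from Node2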

<3> Configure host-name-based communication between the nodes
Node1 & Node2:
 # vim /etc/hosts
192.168.1.17  node1.linuxidc.com node1
192.168.1.18  node2.linuxidc.com node2

<4> Configure time synchronization on each node
Node1 & Node2:
 # crontab -e
*/5 * * * *    /sbin/ntpdate 202.120.2.101 &> /dev/null
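
The cron job keeps the clocks aligned every five minutes; before relying on it, an initial one-shot sync can be done by hand (assuming the same NTP server, 202.120.2.101, is reachable):
 # ntpdate 202.120.2.101
# hwclock -w    # optionally save the synced time to the hardware clock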

<5> Configure the stepping stone (SteppingStone)
Establish ssh mutual trust with Node1 and Node2, and enable host-name-based communication:
 # ssh-keygen -t rsa
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
# vim /etc/hosts
192.168.1.17  node1.linuxidc.com node1
192.168.1.18  node2.linuxidc.com node2

Create a step script tool for running the same command on both nodes:
 # vim step
#!/bin/bash
# Run the given command string on node1 and node2 in turn
if [ $# -eq 1 ]; then
  for I in {1..2}; do
    ssh node$I "$1"
  done
else
  echo "Usage: step 'COMMANDs'"
fi
# chmod +x step
# mv step /usr/sbin
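
As a sanity check, running a harmless command through the step tool should print one result per node, for example:
 # step 'date'
Sat Mar 30 10:00:00 CST 2013    # from node1 (timestamps are illustrative)
Sat Mar 30 10:00:00 CST 2013    # from node2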

<6> Provide a partition of identical size on Node1 and Node2 to serve as the drbd device

Create a 1G partition on each node:
 # fdisk /dev/sda
n --> e --> n --> +1G --> w
# partprobe /dev/sda
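
After partprobe, the new logical partition should be visible to the kernel on both nodes; one way to confirm (the partition is assumed to be /dev/sda5, as used in the resource definition below) is:
 # step 'fdisk -l /dev/sda | grep sda5'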

3. Install the kernel module and management tools

Install the latest 8.3 release:
drbd83-8.3.15-2.el5.centos.i386.rpm
kmod-drbd83-8.3.15-3.el5.centos.i686.rpm
Copy the two rpm packages to each node first, then run the remote installation from SteppingStone:

# step 'yum -y --nogpgcheck localinstall drbd83-8.3.15-2.el5.centos.i386.rpm kmod-drbd83-8.3.15-3.el5.centos.i686.rpm'
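
If the installation succeeded, querying the packages on both nodes from SteppingStone should report the installed versions:
 # step 'rpm -q drbd83 kmod-drbd83'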

4. Configure drbd (Node1)

<1> Copy the sample file into place as the configuration file:

# cp /usr/share/doc/drbd83-8.3.15/drbd.conf  /etc

<2> Configure /etc/drbd.d/global_common.conf
 global {
        usage-count no;    # disable usage statistics reporting
        # minor-count dialog-refresh disable-ip-verification
}
common {
        protocol C;    # use the fully synchronous replication protocol by default
        handlers {
                # These are EXAMPLE handlers only.
                # They may have severe implications,
                # like hard resetting the node under certain circumstances.
                # Be careful when choosing your poison.
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }
        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        }
        disk {
                on-io-error detach;    # detach the backing disk on local I/O error
                # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
                # no-disk-drain no-md-flushes max-bio-bvecs
        }
        net {
                # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
                # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
                # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
                cram-hmac-alg "sha1";    # algorithm used for peer authentication
                shared-secret "mydrbd7788";    # shared secret used with the algorithm above
        }
        syncer {
                rate 200M;    # resynchronization rate
                # rate after al-extents use-rle cpu-mask verify-alg csums-alg
        }
}

<3> Define a resource in /etc/drbd.d/mydrbd.res with the following content:
 resource mydrbd {
        device  /dev/drbd0;
        disk    /dev/sda5;
        meta-disk internal;
        on node1.linuxidc.com {
                address 192.168.1.17:7789;
        }
        on node2.linuxidc.com {
                address 192.168.1.18:7789;
        }
}

Synchronize all of the configuration files above to the other node:

# scp -r /etc/drbd.*  node2:/etc
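
Before initializing the resource, it is worth checking that the configuration parses identically on both nodes; drbdadm dump prints the parsed configuration and reports syntax errors:
 # step 'drbdadm dump mydrbd'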

5. Initialize the defined resource on both nodes and start the service:

<1> Initialize the resource (Node1 and Node2):

# drbdadm create-md mydrbd

<2> Start the service (Node1 and Node2):
# /etc/init.d/drbd start

<3> Check the startup status (Node1):
 # cat /proc/drbd
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by mockbuild@builder17.centos.org, 2013-03-27 16:04:08
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:987896

<4> Promote the current node to primary (Node1)

# drbdadm -- --overwrite-data-of-peer primary mydrbd

Note: this form (overwriting the peer's data) is only needed for the initial setup.
Check the status again:

# drbd-overview
  0:mydrbd  Connected Primary/Secondary UpToDate/UpToDate C r-----

Note: in Primary/Secondary, the left-hand role is the current node and the right-hand role is the peer.
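
Because roles are reported from the local node's point of view, the same command on Node2 should show the mirrored view, Secondary/Primary. Both can be checked at once from SteppingStone (output sketched under that assumption):
 # step 'drbd-overview'
  0:mydrbd  Connected Primary/Secondary UpToDate/UpToDate C r-----    # node1
  0:mydrbd  Connected Secondary/Primary UpToDate/UpToDate C r-----    # node2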


