Building a Highly Available MySQL Cluster on Corosync+DRBD (2)

6. Create the file system and mount it (primary node, Node1)
The file system can be mounted only on the Primary node, so format the DRBD device on the primary:
# mke2fs -j /dev/drbd0
# mkdir /mydata
# mount /dev/drbd0 /mydata
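Formatting will fail (or worse, hit the wrong side) if the node is not currently Primary, so it can help to guard this step with a role check. A minimal sketch; the helper function name is ours, not part of DRBD:

```shell
# Format and mount a DRBD device, but only if this node is Primary.
# 'drbdadm role <res>' prints "<local role>/<peer role>", e.g. Primary/Secondary.
format_and_mount() {
    local res="$1" dev="$2" dir="$3"
    if [ "$(drbdadm role "$res" | cut -d/ -f1)" != "Primary" ]; then
        echo "refusing to format: this node is not Primary for $res" >&2
        return 1
    fi
    mke2fs -j "$dev"     # -j: ext3 (journalled ext2), as in the step above
    mkdir -p "$dir"
    mount "$dev" "$dir"
}
# On Node1: format_and_mount mydrbd /dev/drbd0 /mydata
```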

7. Switch the primary and secondary roles for testing
Node1:
# cp /etc/inittab /mydata
# umount /mydata
# drbdadm secondary mydrbd
# drbd-overview
  0:mydrbd  Connected Secondary/Secondary UpToDate/UpToDate C r-----

Node2:
# drbdadm primary mydrbd
# drbd-overview
  0:mydrbd  Connected Primary/Secondary UpToDate/UpToDate C r-----
# mkdir /mydata
# mount /dev/drbd0 /mydata
# ls /mydata
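The switchover above has a fixed order: unmount on the old primary, demote it, then promote the peer. The demote side can be wrapped in a small helper (the function name is ours; `drbdadm secondary` fails while the device is still mounted):

```shell
# Demote the local node for a DRBD resource. The mount must be released
# first, or 'drbdadm secondary' fails because the device is held open.
demote_local() {
    local res="$1" dir="$2"
    if mountpoint -q "$dir"; then
        umount "$dir" || return 1
    fi
    drbdadm secondary "$res"
}
# On Node1: demote_local mydrbd /mydata
# Then on Node2: drbdadm primary mydrbd && mount /dev/drbd0 /mydata
```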

8. Configure openais/corosync+pacemaker
<1> Install corosync and pacemaker (from the SteppingStone host)

# cd /root/corosync/
# ls
cluster-glue-1.0.6-1.6.el5.i386.rpm
cluster-glue-libs-1.0.6-1.6.el5.i386.rpm
corosync-1.2.7-1.1.el5.i386.rpm
corosynclib-1.2.7-1.1.el5.i386.rpm
heartbeat-3.0.3-2.3.el5.i386.rpm
heartbeat-libs-3.0.3-2.3.el5.i386.rpm
libesmtp-1.0.4-5.el5.i386.rpm
pacemaker-1.1.5-1.1.el5.i386.rpm
pacemaker-libs-1.1.5-1.1.el5.i386.rpm
resource-agents-1.0.4-1.1.el5.i386.rpm
# step 'mkdir /root/corosync'
# for I in {1..2};do scp *.rpm node$I:/root/corosync;done
# step 'yum -y --nogpgcheck localinstall /root/corosync/*.rpm'
# step 'mkdir /var/log/cluster'
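`step` is not a standard command; it is presumably a small helper on the stepping-stone host that runs the same command on every cluster node over SSH. A minimal sketch of such a helper (the `NODES` list is an assumption, adjust to your hostnames):

```shell
# Hypothetical 'step' helper: run one command on every cluster node via SSH.
NODES="node1 node2"

step() {
    local cmd="$1" node
    for node in $NODES; do
        ssh "$node" "$cmd"
    done
}
# Example: step 'yum -y --nogpgcheck localinstall /root/corosync/*.rpm'
```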

<2> Edit the corosync configuration and set up key authentication (Node1)
# cd /etc/corosync/
# cp corosync.conf.example corosync.conf
# vim corosync.conf
# Modify the following entries:
secauth: on
threads: 2
bindnetaddr: 192.168.1.0
to_syslog: no
# vim corosync.conf
# Add the following sections:
service {
        ver:    0
        name:  pacemaker
}
aisexec {
        user:  root
        group:  root
}
# corosync-keygen
# scp -p authkey corosync.conf node2:/etc/corosync/
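`corosync-keygen` blocks until it has gathered enough entropy from /dev/random, then writes a 128-byte key to /etc/corosync/authkey with mode 0400. A quick sanity check before distributing it (the function name is ours):

```shell
# Verify the corosync auth key looks sane before copying it to node2:
# corosync-keygen writes a 128-byte key with permissions 0400.
check_authkey() {
    local key="${1:-/etc/corosync/authkey}"
    [ -f "$key" ] || { echo "missing: $key" >&2; return 1; }
    [ "$(stat -c %a "$key")" = "400" ] || { echo "bad permissions: $key" >&2; return 1; }
    [ "$(stat -c %s "$key")" = "128" ] || { echo "unexpected size: $key" >&2; return 1; }
    echo "authkey OK"
}
```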

<3> Start the service and check it (Node1)

# service corosync start
# ssh node2 'service corosync start'
# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
# grep  TOTEM  /var/log/cluster/corosync.log
# grep pcmk_startup /var/log/cluster/corosync.log
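The three grep checks above can be rolled into one helper that reports which startup markers are missing from the log (a sketch; the default path matches the log file used in this article):

```shell
# Check the corosync log for the expected startup markers.
check_corosync_log() {
    local log="${1:-/var/log/cluster/corosync.log}" pat rc=0
    for pat in "Corosync Cluster Engine" "TOTEM" "pcmk_startup"; do
        grep -q "$pat" "$log" || { echo "missing marker: $pat" >&2; rc=1; }
    done
    return $rc
}
```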

<4> Configure cluster properties
Disable the STONITH device, set the no-quorum policy to ignore (this is a two-node cluster), and set the default resource stickiness:
# crm configure property stonith-enabled=false
# crm configure property no-quorum-policy=ignore
# crm configure rsc_defaults resource-stickiness=100

View the cluster configuration:

# crm configure show
node node1.linuxidc.com
node node2.linuxidc.com
property $id="cib-bootstrap-options" \
        dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"

9. Define the configured DRBD device /dev/drbd0 as a cluster service

<1> Stop the drbd service and disable it at boot (Node1 and Node2)

# service drbd stop
# chkconfig drbd off

<2> Configure DRBD as a cluster resource (Node1)
Add a primitive for the DRBD resource mydrbd and define it as a master/slave resource:

# crm configure primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 op stop timeout=100 op monitor role=Master interval=20 timeout=30 op monitor role=Slave interval=30 timeout=30
# crm configure ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

Note: the cluster resource must not use the same name as the DRBD resource itself (here, mydrbd). If crm status reports errors, recheck the configuration and restart the corosync service.
Check the current cluster status:

# crm status   
============
Last updated: Sat Sep 21 23:27:01 2013
Stack: openais
Current DC: node1.linuxidc.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ node2.linuxidc.com node1.linuxidc.com ]
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
    Masters: [ node1.linuxidc.com ]
    Slaves: [ node2.linuxidc.com ]

<3> Create a cluster service that auto-mounts the mydrbd file system on the master node (Node1)

# crm configure primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydata fstype=ext3 op start timeout=60 op stop timeout=60
# crm configure colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
# crm configure order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start
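With the colocation constraint in place, mystore should always run on whichever node the cluster reports as DRBD Master. A quick consistency check that parses the status output (a sketch; it assumes the exact `crm status`/`crm_mon` layout shown in this article, and the function name is ours):

```shell
# Verify mystore runs on the DRBD Master node, per the colocation rule.
check_colocation() {
    local status master store_node
    status=$(crm_mon -1)
    # "Masters: [ nodeX ]" -> third field is the node name
    master=$(echo "$status" | awk '/Masters:/ {print $3}')
    # "mystore (...): Started nodeX" -> last field is the node name
    store_node=$(echo "$status" | awk '/^ *mystore/ {print $NF}')
    [ -n "$master" ] && [ "$master" = "$store_node" ]
}
```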

Check the resources' running state:

# crm status
============
Last updated: Sat Sep 21 23:55:01 2013
Stack: openais
Current DC: node1.linuxidc.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ node2.linuxidc.com node1.linuxidc.com ]
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
    Masters: [ node1.linuxidc.com ]
    Slaves: [ node2.linuxidc.com ]
 mystore        (ocf::heartbeat:Filesystem):    Started node1.linuxidc.com

<4> Simulate a failure for testing

Put node1 into standby, and the resources fail over to node2:
# crm node standby
# crm status
============
Last updated: Sat Sep 21 23:59:38 2013
Stack: openais
Current DC: node1.linuxidc.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Node node1.linuxidc.com: standby
Online: [ node2.linuxidc.com ]
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
    Masters: [ node2.linuxidc.com ]
    Stopped: [ mysqldrbd:0 ]
 mystore        (ocf::heartbeat:Filesystem):    Started node2.linuxidc.com
# ls /mydata/
inittab  lost+found
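The standby drill above can be scripted as a failover smoke test (a sketch; resource and node names as above, and `crm node standby` accepts an explicit node argument):

```shell
# Failover smoke test: put a node in standby, wait for pacemaker to
# migrate the resources, and confirm mystore left that node.
failover_test() {
    local node="$1"
    crm node standby "$node"
    sleep 15                      # give pacemaker time to migrate
    crm_mon -1 | grep "mystore" | grep -qv "$node" || return 1
    crm node online "$node"       # bring the node back afterwards
}
# Example: failover_test node1.linuxidc.com
```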

Bring node1 back online; node2 remains the master node:
# crm node online
# crm status
============
Last updated: Sat Sep 21 23:59:59 2013
Stack: openais
Current DC: node1.linuxidc.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ node2.linuxidc.com node1.linuxidc.com ]
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
    Masters: [ node2.linuxidc.com ]
    Slaves: [ node1.linuxidc.com ]
 mystore        (ocf::heartbeat:Filesystem):    Started node2.linuxidc.com
