
[Recommended Post] Implementing DRBD: An NFS High-Availability Architecture (NFS + Heartbeat + DRBD)

This post comes from blogger huangbo929; for any questions, please visit the author's blog page to discuss. Original post: http://abool.blog.51cto.com/8355508/1587880

We currently run two NFS servers in production, one primary and one standby, with data synchronized from primary to standby by rsync. The data is mostly image content, and moreover consists of *** small, fragmented image files. Under the current setup the NFS service is a single point of failure, and rsync cannot keep the data synchronized in real time, so consistency cannot be guaranteed. To protect service availability and data safety, we therefore need a new architecture that removes the NFS single point of failure and synchronizes the data in real time.

And then... there was no "and then" for the old setup.

Below is an (admittedly ugly) diagram of the new architecture. It has already been deployed in the company's test environment and has gone through some, though not exhaustive, testing.

Architecture topology:

(Architecture diagram: NFS + Heartbeat + DRBD high-availability topology)

Brief description:

The two NFS servers communicate with the other internal business servers over the em1 NIC; em2 carries the heartbeat traffic between the two NFS servers; em3 carries the DRBD data replication traffic.

The two front-end image servers use the NFS cluster through the VIP 192.168.0.219 that the cluster exposes.

I. Project Infrastructure and Environment

1. Hardware

Hardware of the two existing NFS storage servers:
CPU: Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz
MEM: 16G
Raid: RAID 1
Disk: SSD 200G x2
NICs: 4 integrated gigabit NICs (Link is up at 1000Mbps, full duplex)
Hardware of the two front-end static image servers:
omitted

2. Network

Floating VIP: 192.168.0.219   # floats between M1 and M2 and serves client traffic
Network configuration of the two existing NFS storage servers:
Hostname: M1.redhat.sx
em1: 192.168.0.210   (internal network)
em2: 172.16.0.210    (heartbeat link)
em3: 172.16.100.210  (gigabit DRBD replication link)
Hostname: M2.redhat.sx
em1: 192.168.0.211   (internal network)
em2: 172.16.0.211    (heartbeat link)
em3: 172.16.100.211  (gigabit DRBD replication link)

3. System Environment

Kernel version: 2.6.32-504.el6.x86_64
OS version: CentOS 6.5
Architecture: x86_64
Firewall rules flushed
SELinux disabled
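
For reference, a minimal sketch of how the firewall and SELinux are typically brought into this state on CentOS 6 (the exact commands are an assumption, not taken from the original post):

[root@M1 ~]# iptables -F                           # flush firewall rules
[root@M1 ~]# /etc/init.d/iptables stop             # stop the firewall service
[root@M1 ~]# chkconfig iptables off                # keep it off after reboots
[root@M1 ~]# setenforce 0                          # disable SELinux for the running system
[root@M1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # and make it permanent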

4. Software Versions

heartbeat-3.0.4-2.el6.x86_64 
drbd-8.4.3 
rpcbind-0.2.0-11.el6.x86_64 
nfs-utils-1.2.3-54.el6.x86_64

II. Basic Service Configuration

Only the configuration of M1 is shown here; M2 is configured the same way.

1. Configure time synchronization

On M1:

[root@M1 ~]# ntpdate pool.ntp.org
12 Nov 14:45:15 ntpdate[27898]: adjust time server 42.96.167.209 offset 0.044720 sec

On M2:

[root@M2 ~]# ntpdate pool.ntp.org
12 Nov 14:45:06 ntpdate[24447]: adjust time server 42.96.167.209 offset 0.063174 sec
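
A one-shot ntpdate only corrects the clock once. To keep the two nodes from drifting apart over time, a periodic sync is commonly added on both nodes; a small sketch (the interval and time server are assumptions):

[root@M1 ~]# echo '*/5 * * * * /usr/sbin/ntpdate pool.ntp.org >/dev/null 2>&1' >> /var/spool/cron/root
[root@M1 ~]# crontab -l    # verify the cron entry was added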

2. Configure the /etc/hosts file

On M1:

[root@M1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.210 M1.redhat.sx
192.168.0.211 M2.redhat.sx

On M2:

[root@M2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.210 M1.redhat.sx
192.168.0.211 M2.redhat.sx

3. Add host routes

First, verify that the IP addresses on M1 and M2 match the plan.

On M1:

[root@M1 ~]# ifconfig | egrep 'Link encap|inet addr'    # verify the current IP configuration
em1       Link encap:Ethernet  HWaddr B8:CA:3A:F1:00:2F
          inet addr:192.168.0.210  Bcast:192.168.0.255  Mask:255.255.255.0
em2       Link encap:Ethernet  HWaddr B8:CA:3A:F1:00:30
          inet addr:172.16.0.210  Bcast:172.16.0.255  Mask:255.255.255.0
em3       Link encap:Ethernet  HWaddr B8:CA:3A:F1:00:31
          inet addr:172.16.100.210  Bcast:172.16.100.255  Mask:255.255.255.0
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0

On M2:

[root@M2 ~]# ifconfig | egrep 'Link encap|inet addr'
em1       Link encap:Ethernet  HWaddr B8:CA:3A:F1:DE:37
          inet addr:192.168.0.211  Bcast:192.168.0.255  Mask:255.255.255.0
em2       Link encap:Ethernet  HWaddr B8:CA:3A:F1:DE:38
          inet addr:172.16.0.211  Bcast:172.16.0.255  Mask:255.255.255.0
em3       Link encap:Ethernet  HWaddr B8:CA:3A:F1:DE:39
          inet addr:172.16.100.211  Bcast:172.16.100.255  Mask:255.255.255.0
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0

Check the existing routing table, then add end-to-end host routes for the heartbeat link and the DRBD replication link, so that heartbeat checks and data replication are not disturbed by other traffic.

On M1:

[root@M1 network-scripts]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.16.100.0    0.0.0.0         255.255.255.0   U     0      0        0 em3
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 em2
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 em2
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 em3
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 em1
[root@M1 network-scripts]# /sbin/route add -host 172.16.0.211 dev em2
[root@M1 network-scripts]# /sbin/route add -host 172.16.100.211 dev em3
[root@M1 network-scripts]# echo '/sbin/route add -host 172.16.0.211 dev em2' >> /etc/rc.local
[root@M1 network-scripts]# echo '/sbin/route add -host 172.16.100.211 dev em3' >> /etc/rc.local
[root@M1 network-scripts]# tail -2 /etc/rc.local
/sbin/route add -host 172.16.0.211 dev em2
/sbin/route add -host 172.16.100.211 dev em3
[root@M1 network-scripts]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.16.0.211    0.0.0.0         255.255.255.255 UH    0      0        0 em2
172.16.100.211  0.0.0.0         255.255.255.255 UH    0      0        0 em3
172.16.100.0    0.0.0.0         255.255.255.0   U     0      0        0 em3
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 em2
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 em2
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 em3
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 em1
[root@M1 network-scripts]# traceroute 172.16.0.211
traceroute to 172.16.0.211 (172.16.0.211), 30 hops max, 60 byte packets
 1  172.16.0.211 (172.16.0.211)  0.820 ms  0.846 ms  0.928 ms
[root@M1 network-scripts]# traceroute 172.16.100.211
traceroute to 172.16.100.211 (172.16.100.211), 30 hops max, 60 byte packets
 1  172.16.100.211 (172.16.100.211)  0.291 ms  0.273 ms  0.257 ms

On M2:

[root@M2 network-scripts]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.16.100.0    0.0.0.0         255.255.255.0   U     0      0        0 em3
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 em2
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 em2
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 em3
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 em1
[root@M2 network-scripts]# /sbin/route add -host 172.16.0.210 dev em2
[root@M2 network-scripts]# /sbin/route add -host 172.16.100.210 dev em3
[root@M2 network-scripts]# echo '/sbin/route add -host 172.16.0.210 dev em2' >> /etc/rc.local
[root@M2 network-scripts]# echo '/sbin/route add -host 172.16.100.210 dev em3' >> /etc/rc.local
[root@M2 network-scripts]# tail -2 /etc/rc.local
/sbin/route add -host 172.16.0.210 dev em2
/sbin/route add -host 172.16.100.210 dev em3
[root@M2 network-scripts]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.16.0.210    0.0.0.0         255.255.255.255 UH    0      0        0 em2
172.16.100.210  0.0.0.0         255.255.255.255 UH    0      0        0 em3
172.16.100.0    0.0.0.0         255.255.255.0   U     0      0        0 em3
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 em2
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 em2
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 em3
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 em1
[root@M2 network-scripts]# traceroute 172.16.0.210
traceroute to 172.16.0.210 (172.16.0.210), 30 hops max, 60 byte packets
 1  172.16.0.210 (172.16.0.210)  0.816 ms  0.843 ms  0.922 ms
[root@M2 network-scripts]# traceroute 172.16.100.210
traceroute to 172.16.100.210 (172.16.100.210), 30 hops max, 60 byte packets
 1  172.16.100.210 (172.16.100.210)  0.256 ms  0.232 ms  0.215 ms
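
As an alternative to appending route commands to /etc/rc.local, CentOS 6 also supports per-interface static-route files, which survive both reboots and network restarts; a hedged sketch for M1 (the file contents are an assumption, following the standard route-<interface> convention):

[root@M1 ~]# echo '172.16.0.211/32 dev em2' > /etc/sysconfig/network-scripts/route-em2
[root@M1 ~]# echo '172.16.100.211/32 dev em3' > /etc/sysconfig/network-scripts/route-em3
[root@M1 ~]# /etc/init.d/network restart    # reloads the interfaces and their static routes (briefly interrupts networking)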


III. Deploying the Heartbeat Service

Only the installation on M1 is shown here; repeat the same steps on M2.

1. Install the heartbeat packages

[root@M1 ~]# cd /etc/yum.repos.d/
[root@M1 yum.repos.d]# wget http://mirrors.163.com/.help/CentOS6-Base-163.repo
[root@M1 yum.repos.d]# rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@M1 yum.repos.d]# sed -i 's@#baseurl@baseurl@g' *
[root@M1 yum.repos.d]# sed -i 's@mirrorlist@#mirrorlist@g' *
[root@M1 yum.repos.d]# yum install heartbeat -y    # this command occasionally needs to be run twice

2. Configure the heartbeat service

[root@M1 yum.repos.d]# cd /usr/share/doc/heartbeat-3.0.4/
[root@M1 heartbeat-3.0.4]# ll | egrep 'ha.cf|authkeys|haresources'
-rw-r--r--. 1 root root   645 Dec  3  2013 authkeys       # heartbeat authentication file
-rw-r--r--. 1 root root 10502 Dec  3  2013 ha.cf          # heartbeat main configuration file
-rw-r--r--. 1 root root  5905 Dec  3  2013 haresources    # heartbeat resource file
[root@M1 heartbeat-3.0.4]# cp ha.cf authkeys haresources /etc/ha.d/
[root@M1 heartbeat-3.0.4]# cd /etc/ha.d/
[root@M1 ha.d]# ls
authkeys  ha.cf  harc  haresources  rc.d  README.config  resource.d  shellfuncs

Note: the configuration files (ha.cf, authkeys, haresources) must be identical on the primary and standby nodes; the contents used on these nodes are listed below.

Configuring heartbeat mainly comes down to editing ha.cf, authkeys and haresources. The contents of these three files are given below for reference.
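
Because both nodes must carry identical copies, one simple way to keep them in sync is to edit the files on M1 and push them to M2 (a minimal sketch; it assumes root SSH access between the nodes):

[root@M1 ha.d]# scp /etc/ha.d/ha.cf /etc/ha.d/authkeys /etc/ha.d/haresources root@M2.redhat.sx:/etc/ha.d/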

a. The ha.cf file

[root@M1 ~]# cat /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 10
warntime 6
#initdead 120
udpport 694
#bcast em2
mcast em2 225.0.0.192 694 1 0
auto_failback on
respawn hacluster /usr/lib64/heartbeat/ipfail
node M1.redhat.sx
node M2.redhat.sx
ping 192.168.0.1

b. The authkeys file

[root@M1 ha.d]# cat authkeys
auth 1              # which authentication scheme to use
1 crc               # crc: no real authentication strength
#2 sha1 HI!         # sha1-based authentication
#3 md5 Hello!       # md5-based authentication
[root@M1 ha.d]# chmod 600 authkeys    # the file must be mode 600, otherwise heartbeat refuses to start

c. The haresources file

[root@M1 ha.d]# cat haresources
M1.redhat.sx IPaddr::192.168.0.219/24/em1
#NFS IPaddr::192.168.0.219/24/em1 drbddisk::data Filesystem::/dev/drbd0::/data::ext4 rpcbind nfsd

Note: the nfsd resource used here is not shipped with heartbeat; the script has to be written by hand.

The script must meet the following requirements:

1. It must be executable.

2. It must live in /etc/ha.d/resource.d or /etc/init.d.

3. It must implement at least the start and stop actions.

The script itself is shown later in this post.

4. Start heartbeat

[root@M1 ha.d]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
[root@M1 ha.d]# chkconfig heartbeat off

Note: autostart at boot is deliberately disabled; after a server reboot, heartbeat has to be started manually.

5. Test heartbeat

Before running this test, make sure the steps above have also been completed on M2!

a. Normal state

[root@M1 ha.d]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1    # the VIP defined in the heartbeat resource file
[root@M2 ha.d]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1

Note: the primary node M1 holds the VIP, while M2 does not.

b. Simulate the primary node going down

[root@M1 ha.d]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
[root@M1 ha.d]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
[root@M2 ha.d]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1

Note: after M1 goes down, the VIP floats over to M2, which becomes the new primary node.

c. State after the primary node recovers

[root@M1 ha.d]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
[root@M1 ha.d]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1

Note: once M1 comes back, it reclaims the VIP resource.
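
This failback is caused by the auto_failback on setting in ha.cf. If you would rather avoid the extra switchover when the failed node returns, that single line can be changed (shown here only as an option, not what was used in this setup):

auto_failback off    # keep resources on the surviving node even after the failed node comes back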


IV. DRBD Installation and Deployment

1. Add (and initialize) the new disk

Details omitted.
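
The DRBD resource defined later uses the logical volume /dev/mapper/VolGroup-lv_drbd as its backing device. As a rough sketch of how such a volume might be created (the disk name /dev/sdb and the 128G size are assumptions, not taken from the original setup):

[root@M1 ~]# pvcreate /dev/sdb                      # turn the new disk into an LVM physical volume
[root@M1 ~]# vgextend VolGroup /dev/sdb             # add it to the existing VolGroup volume group
[root@M1 ~]# lvcreate -L 128G -n lv_drbd VolGroup   # carve out the LV that DRBD will replicate
[root@M1 ~]# lvs | grep lv_drbd                     # confirm the volume exists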

2. Install DRBD

DRBD can be installed either from a yum repository or from source. Because the yum repositories available to me at the time did not carry a DRBD rpm, I built it from source (a yum-based alternative is sketched after the build steps below).

[root@M1 ~]# yum -y install gcc gcc-c++ kernel-devel kernel-headers flex make
[root@M1 ~]# cd /usr/local/src
[root@M1 src]# wget http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz
[root@M1 src]# tar zxf drbd-8.4.3.tar.gz
[root@M1 src]# cd drbd-8.4.3
[root@M1 drbd-8.4.3]# ./configure --prefix=/usr/local/drbd --with-km --with-heartbeat
[root@M1 drbd-8.4.3]# make KDIR=/usr/src/kernels/2.6.32-504.el6.x86_64/
[root@M1 drbd-8.4.3]# make install
[root@M1 drbd-8.4.3]# mkdir -p /usr/local/drbd/var/run/drbd
[root@M1 drbd-8.4.3]# cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/init.d/
[root@M1 drbd-8.4.3]# chmod +x /etc/init.d/drbd
[root@M1 drbd-8.4.3]# modprobe drbd        # load the drbd module into the kernel
[root@M1 drbd-8.4.3]# lsmod | grep drbd    # check that the module loaded correctly
drbd                  310236  3
libcrc32c               1246  1 drbd
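
For reference, on systems where the ELRepo repository is reachable, a yum-based install is usually possible as well; a hedged sketch (the elrepo-release URL and package versions are assumptions and may differ):

[root@M1 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-6-8.el6.elrepo.noarch.rpm   # enable ELRepo (exact release rpm may differ)
[root@M1 ~]# yum install -y drbd84-utils kmod-drbd84                                   # DRBD 8.4 userland tools and kernel module
[root@M1 ~]# modprobe drbd && lsmod | grep drbd                                        # load and verify the module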

3. Configure DRBD

The DRBD configuration consists of global_common.conf plus one or more user-defined resource files (the resource definitions could also be written directly into global_common.conf).

Note: the following configuration files are identical on M1 and M2.

[root@M1 ~]# cat /usr/local/drbd/etc/drbd.d/global_common.conf
global {
    usage-count no;
}
common {
    protocol C;
    disk {
        on-io-error detach;        # on an I/O error, detach the backing device
        no-disk-flushes;
        no-md-flushes;
    }
    net {
        cram-hmac-alg "sha1";      # peer authentication algorithm
        shared-secret "allendrbd"; # shared secret for peer authentication
        sndbuf-size 512k;
        max-buffers 8000;
        unplug-watermark 1024;
        max-epoch-size 8000;
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
    syncer {
        rate 1024M;                # network rate used when the nodes resynchronize
        al-extents 517;
    }
}
[root@M1 ~]# cat /usr/local/drbd/etc/drbd.d/drbd.res
resource drbd {                                    # name of the DRBD resource
    on M1.redhat.sx {                              # a host section starts with "on" followed by the hostname
        device /dev/drbd0;                         # DRBD device name
        disk /dev/mapper/VolGroup-lv_drbd;         # backing logical volume for drbd0
        address 172.16.100.210:7789;               # listen address and port for DRBD
        meta-disk internal;                        # store the metadata on the same device (internal)
    }
    on M2.redhat.sx {
        device /dev/drbd0;
        disk /dev/mapper/VolGroup-lv_drbd;
        address 172.16.100.211:7789;
        meta-disk internal;
    }
}

4. Initialize the metadata area

[root@M1 drbd]# drbdadm create-md drbd
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
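
If the backing volume previously held a filesystem, drbdadm create-md may refuse to write the metadata until the old signature is wiped. A commonly used, destructive workaround is sketched below; it assumes the backing device is the /dev/mapper/VolGroup-lv_drbd volume defined above and that any data on it is disposable:

[root@M1 ~]# dd if=/dev/zero of=/dev/mapper/VolGroup-lv_drbd bs=1M count=128   # destroys the old filesystem signature (and any data)!
[root@M1 ~]# drbdadm create-md drbd                                            # then retry the metadata creation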

5. Start the DRBD service

Here is how the DRBD device state on M1 and M2 changes before and after the service is brought up.

On M1:

[root@M1 drbd]# cat /proc/drbd    # device state before starting drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
[root@M1 drbd]# drbdadm up all    # bring up the resource (the init script could be used instead)
[root@M1 drbd]# cat /proc/drbd    # device state after starting drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:133615596

On M2:

[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
[root@M2 ~]# drbdadm up all
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:133615596

6. Run the initial sync and promote the primary node (this overwrites the standby so that both sides end up consistent)

On M1:

[root@M1 drbd]# drbdadm -- --overwrite-data-of-peer primary drbd
[root@M1 drbd]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
    ns:140132 nr:0 dw:0 dr:144024 al:0 bm:8 lo:0 pe:17 ua:26 ap:0 ep:1 wo:d oos:133477612
        [>....................] sync'ed:  0.2% (130348/130480)M
        finish: 0:16:07 speed: 137,984 (137,984) K/sec

On M2:

[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:461440 dw:461312 dr:0 al:0 bm:28 lo:2 pe:75 ua:1 ap:0 ep:1 wo:d oos:133154284
        [>....................] sync'ed:  0.4% (130032/130480)M
        finish: 0:19:13 speed: 115,328 (115,328) want: 102,400 K/sec

State after the sync completes:

On M1:

[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:133615596 nr:0 dw:0 dr:133616260 al:0 bm:8156 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

On M2:

[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:133615596 dw:133615596 dr:0 al:0 bm:8156 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

7. Create a filesystem on the DRBD device and mount it on the /data directory

[root@M1 drbd]# mkfs.ext4 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
8355840 inodes, 33403899 blocks
1670194 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1020 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@M1 drbd]# mount /dev/drbd0 /data/
[root@M1 drbd]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       50G  5.6G   42G  12% /
tmpfs                 7.8G     0  7.8G   0% /dev/shm
/dev/sda1             477M   46M  406M  11% /boot
/dev/drbd0            126G   60M  119G   1% /data

8. Verify that data written on the primary node is replicated to the standby

On M1:

[root@M1 drbd]# dd if=/dev/zero of=/data/test bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 1.26333 s, 850 MB/s
[root@M1 drbd]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:135840788 nr:0 dw:2225192 dr:133617369 al:619 bm:8156 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
[root@M1 drbd]# umount /data/
[root@M1 drbd]# drbdadm down drbd    # shut down the resource named "drbd"

On M2:

[root@M2 ~]# cat /proc/drbd    # after the primary shuts the resource down, the peer's role shows up as Unknown
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown C r-----
    ns:0 nr:136889524 dw:136889524 dr:0 al:0 bm:8156 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
[root@M2 ~]# drbdadm primary drbd    # promote this node to primary
[root@M2 ~]# mount /dev/drbd0 /data
[root@M2 ~]# cd /data
[root@M2 data]# ls    # the data written on M1 is still there
lost+found  test
[root@M2 data]# du -sh test
1.1G    test
[root@M2 data]# cat /proc/drbd    # current drbd device state
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-----
    ns:0 nr:136889524 dw:136889548 dr:1045 al:3 bm:8156 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:24


V. NFS Installation and Deployment

Again, only M1 is shown; M2 is configured the same way.

1. Install NFS

[root@M1 drbd]# yum install nfs-utils rpcbind -y
[root@M2 ~]# yum install nfs-utils rpcbind -y

2. Configure the NFS export

[root@M1 drbd]# cat /etc/exports
/data 192.168.0.0/24(rw,sync,no_root_squash,anonuid=0,anongid=0)
[root@M2 ~]# cat /etc/exports
/data 192.168.0.0/24(rw,sync,no_root_squash,anonuid=0,anongid=0)
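
Once the services below are running, later changes to /etc/exports can be applied and inspected without a full restart; for example:

[root@M1 ~]# exportfs -rv              # re-export everything in /etc/exports and show the result
[root@M1 ~]# showmount -e localhost    # list the exports as a client would see them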

3. Start the rpcbind and nfs services

[root@M1 drbd]# /etc/init.d/rpcbind start; chkconfig rpcbind off
[root@M1 drbd]# /etc/init.d/nfs start; chkconfig nfs off
Starting NFS services:  [  OK  ]
Starting NFS quotas:    [  OK  ]
Starting NFS mountd:    [  OK  ]
Starting NFS daemon:    [  OK  ]
Starting RPC idmapd:    [  OK  ]
[root@M2 drbd]# /etc/init.d/rpcbind start; chkconfig rpcbind off
[root@M2 drbd]# /etc/init.d/nfs start; chkconfig nfs off
Starting NFS services:  [  OK  ]
Starting NFS quotas:    [  OK  ]
Starting NFS mountd:    [  OK  ]
Starting NFS daemon:    [  OK  ]
Starting RPC idmapd:    [  OK  ]

4. Test NFS from a client

[root@C1 ~]# mount -t nfs -o noatime,nodiratime 192.168.0.219:/data /xxxxx/
[root@C1 ~]# df -h | grep data
192.168.0.219:/data   126G  1.1G  118G   1% /data
[root@C1 ~]# cd /data
[root@C1 data]# ls
lost+found  test
[root@C1 data]# echo 'nolinux' >> nihao
[root@C1 data]# ls
lost+found  nihao  test
[root@C1 data]# cat nihao
nolinux

VI. Integrating Heartbeat, DRBD and NFS

Note: the heartbeat files and scripts modified below must be kept identical on M1 and M2!

1. Modify the heartbeat resource file

Edit the heartbeat resource definition so that the DRBD service, the filesystem mount and the NFS service are all managed automatically. The result looks like this:

[root@M1 ~]# cat /etc/ha.d/haresources
M1.redhat.sx IPaddr::192.168.0.219/24/em1 drbddisk::drbd Filesystem::/dev/drbd0::/data::ext4 nfsd

Note that the IPaddr and drbddisk resources used in this file are scripts located in /etc/ha.d/resource.d/; that directory ships with many service management scripts for heartbeat to call. The trailing nfsd resource, however, is not shipped with heartbeat, so the script is provided here.

[root@M1 /]# vim /etc/ha.d/resource.d/nfsd
#!/bin/bash
#
case $1 in
    start)
        /etc/init.d/nfs restart
        ;;
    stop)
        for proc in rpc.mountd rpc.rquotad nfsd nfsd
        do
            killall -9 $proc
        done
        ;;
esac
[root@M1 /]# chmod 755 /etc/ha.d/resource.d/nfsd

Although the system ships with an init script for nfs, that script does not kill the nfs processes completely when heartbeat calls it, which is why we have to write our own.
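
To double-check that a stop really leaves no NFS daemons behind, a quick sanity check along these lines can be used (the exact process list may vary by setup):

[root@M1 ~]# /etc/ha.d/resource.d/nfsd stop
[root@M1 ~]# ps -ef | egrep 'rpc.mountd|rpc.rquotad|nfsd' | grep -v grep   # should print nothing
[root@M1 ~]# rpcinfo -p | grep -w nfs                                      # nfs should no longer be registered with rpcbind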

2. Restart heartbeat and bring up the highly available NFS service

Perform the following operations strictly in order!

[root@M1 ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services:
Done.
[root@M2 ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services:
Done.
[root@M1 ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
[root@M2 ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
[root@M1 ~]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M2 ~]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:24936 nr:13016 dw:37920 dr:17307 al:15 bm:5 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:84 nr:24 dw:37896 dr:10589 al:14 bm:5 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Mount test from C1:
[root@C1 ~]# mount 192.168.0.219:/data /data
[root@C1 ~]# df -h | grep data
192.168.0.219:/data   126G   60M  119G   1% /data

OK, so the C1 client can successfully mount, via the VIP, the NFS share exported by the highly available NFS cluster.

3. Failover testing

Now test whether the NFS high-availability cluster actually fails over correctly when something breaks.

a. Test whether NFS keeps working after the heartbeat service is stopped

State of M1 before its heartbeat service is stopped:

[root@M1 ~]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:8803768 nr:3736832 dw:12540596 dr:5252 al:2578 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

State of M2 before M1's heartbeat service is stopped:

[root@M2 ~]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:4014352 nr:11417156 dw:15431508 dr:5941 al:1168 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

Stop the heartbeat service on M1:

[root@M1 ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.

State of M1 after its heartbeat service is stopped:

[root@M1 ~]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:11417152 nr:4014300 dw:15431448 dr:7037 al:3221 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

State of M2 after M1's heartbeat service is stopped:

[root@M2 ~]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:4014300 nr:11417152 dw:15431452 dr:5941 al:1168 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0


Now restore the heartbeat service on M1 and check whether the resources fail back from M2.

Restore the heartbeat service on M1:

[root@M1 ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.

State of M1 after its heartbeat service is restored:

[root@M1 ~]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:11417156 nr:4014352 dw:15431504 dr:7874 al:3221 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

State of M2 after M1's heartbeat service is restored:

[root@M2 ~]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:4014352 nr:11417156 dw:15431508 dr:5941 al:1168 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

Impact of the NFS failover as observed from the client C1:

[root@C1 ~]# for i in `seq 1 10000`; do dd if=/dev/zero of=/data/test$i bs=10M count=1; stat /data/test$i | grep 'Access: 2014'; done    # only part of the output is shown here
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 15.1816 s, 691 kB/s
Access: 2014-11-12 23:26:15.945546803 +0800
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.20511 s, 51.1 MB/s
Access: 2014-11-12 23:28:11.687931979 +0800
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.20316 s, 51.6 MB/s
Access: 2014-11-12 23:28:11.900936657 +0800

Note: from what I observed, the NFS clients always needed roughly 2 minutes before I/O resumed after a switchover. I tried many approaches, but this problem remains unsolved!
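
For what it is worth, two mitigations are often suggested for DRBD+NFS failover setups; I did not verify them in this environment, so treat the sketch below as an assumption rather than a fix. The idea is to keep NFSv3 file handles and client state valid across the switchover by pinning an identical fsid on both nodes' exports and by moving the NFS state directory onto the replicated volume so the new primary sees it.

# (1) /etc/exports on both M1 and M2 -- the fsid value is arbitrary but must match on both nodes:
# /data 192.168.0.0/24(rw,sync,no_root_squash,anonuid=0,anongid=0,fsid=1)
# (2) keep rpc.statd/mountd state on the DRBD volume (run on the current primary while /data is mounted):
[root@M1 ~]# /etc/init.d/nfs stop
[root@M1 ~]# mv /var/lib/nfs /data/var_lib_nfs
[root@M1 ~]# ln -s /data/var_lib_nfs /var/lib/nfs
[root@M1 ~]# /etc/init.d/nfs start
# on the standby, replace its local state directory with the same symlink:
[root@M2 ~]# rm -rf /var/lib/nfs && ln -s /data/var_lib_nfs /var/lib/nfs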

b. Test whether NFS keeps working when a network other than the heartbeat link goes down

State of M1 before its em1 interface is taken down:

[root@M1 ~]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:11417156 nr:4014352 dw:15431504 dr:7874 al:3221 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

Take down the em1 interface on M1:

[root@M1 ~]# ifdown em1

State of M1 after its em1 interface is down (reached from M2 over the heartbeat link via SSH):

[root@M1 ~]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN qlen 1000
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:11993288 nr:4024660 dw:16017944 dr:8890 al:3222 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

State of M2 after M1's em1 interface is down:

[root@M2 ~]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:4024620 nr:11993288 dw:16017908 dr:7090 al:1171 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

Bring the em1 interface on M1 back up:

[root@M1 ~]# ifup em1

State of M1 after em1 is back up:

[root@M1 ~]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:11993292 nr:4024680 dw:16017968 dr:9727 al:3222 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

State of M2 after M1's em1 is back up:

[root@M2 ~]# ip a | grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:4024680 nr:11993292 dw:16017972 dr:7102 al:1171 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

Split-brain issues with heartbeat and keepalived are not covered here; I will write about them in a separate post.

The text above is the proposal I wrote during a recent storage overhaul at my company; I am sharing it here for reference.

Later, during testing, it turned out that because NFS relies on RPC (and thus on the rpcbind mechanism) for its communication, clients see a 1-2 minute delay after the NFS server fails over. When clients are writing heavily the delay can be even longer, and even with no client writes it still takes well over a minute. For that reason we eventually abandoned this architecture. I would be curious how other 51CTO bloggers deal with the client-side delay caused by an NFS server switchover.
