NFS (NAS Network Storage) DRBD High Availability



Installation Preparation

  • Server information

    IP Address       Role / Hostname
    192.168.1.97     nfs-master.host.com
    192.168.1.98     nfs-backup.host.com
    192.168.1.10     keepalived VIP
  • System information

       Static hostname: nfs-master.host.com
             Icon name: computer-vm
               Chassis: vm
            Machine ID: d0a403dc7abd4a78b64dc5b22e4db7b4
               Boot ID: 164d9d5c045f4c2f9496aa40326b38a2
        Virtualization: vmware
      Operating System: CentOS Linux 7 (Core)
           CPE OS Name: cpe:/o:centos:centos:7
                Kernel: Linux 3.10.0-1127.el7.x86_64
          Architecture: x86-64
  • Shared directory configuration
Master and Backup use the same path /private_data
  • System initialization
cat >> /etc/sysctl.d/nfs.conf << EOF
vm.overcommit_memory = 1
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_abort_on_overflow = 0
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.ipv4.netfilter.ip_conntrack_max = 2097152
net.nf_conntrack_max = 655360
net.netfilter.nf_conntrack_tcp_timeout_established = 1200
EOF
/sbin/sysctl -p /etc/sysctl.d/nfs.conf
# If this reports an error (nf_conntrack_max: no such file or directory), run modprobe ip_conntrack and then re-run /sbin/sysctl -p /etc/sysctl.d/nfs.conf (a scripted version follows)
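The workaround from the comment above can be folded into the step itself so it works on a fresh host. A minimal sketch, assuming CentOS 7 where ip_conntrack is a module alias for nf_conntrack:

# Load the conntrack module first so the net.nf_conntrack_* keys exist,
# then apply the sysctl file
modprobe ip_conntrack
/sbin/sysctl -p /etc/sysctl.d/nfs.conf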

Installing and Configuring DRBD and NFS

Run the following installation steps on both servers.

  • Install NFS
yum -y install nfs-utils rpcbind
  • Configure the ELRepo repository and install DRBD
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum install drbd84 kmod-drbd84 -y
  • Load the kernel module (a boot-persistence sketch follows)

    modprobe drbd
    # Verify the module is loaded
    lsmod | grep drbd
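modprobe does not survive a reboot, so it is worth making the module load persistent. A small sketch using systemd's standard modules-load.d mechanism (this step is not in the original article; treat it as an optional addition):

    # Have systemd load drbd automatically at every boot
    echo drbd > /etc/modules-load.d/drbd.conf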
    
  • Review /etc/drbd.conf

    # You can find an example in /usr/share/doc/drbd.../drbd.conf.example
    #include "drbd.d/global_common.conf";
    include "drbd.d/*.res";
    
  • Add the resource configuration file (a quick verification sketch follows the listing)

    global {
        usage-count yes;               # whether to report to DRBD's usage statistics; defaults to yes
    }
    common {
        syncer { rate 30M; }           # maximum sync rate between primary and secondary; the default unit is bytes, here set in megabytes
    }
    resource r0 {                      # r0 is the resource name, used when initializing the disk
        protocol C;                    # use protocol C
        handlers {
            pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
            pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt";
            local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
            fence-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
            pri-lost "echo pri-lost. Have a look at the log file. | mail -s 'Drbd Alert' root";
            split-brain "/usr/lib/drbd/notify-split-brain.sh root";
            out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        }
        net {
            cram-hmac-alg "sha1";
            shared-secret "NFS-HA";    # authentication algorithm and secret used for DRBD replication
        }
        disk {
            on-io-error detach;
            fencing resource-only;     # with DOPD (drbd outdate-peer daemon), prevents failover while data is out of sync
        }
        startup {
            wfc-timeout 120;
            degr-wfc-timeout 120;
        }
        device /dev/drbd0;             # /dev/drbd0 is the device users mount; it is created by DRBD
        on nfs-master.host.com {       # each host section starts with "on" plus the hostname (must resolve via /etc/hosts)
            disk /dev/sdb1;            # backing disk that becomes /dev/drbd0
            address 192.168.1.97:7788; # DRBD listening address/port for talking to the peer
            meta-disk internal;        # where DRBD stores its metadata
        }
        on nfs-backup.host.com {
            disk /dev/sdb1;
            address 192.168.1.98:7788;
            meta-disk internal;
        }
    }
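Because the "on" sections are matched by hostname, both nodes need consistent /etc/hosts entries, and the resource file can then be syntax-checked before use. A short sketch using the addresses from this article:

    # On both nodes: make the hostnames resolvable
    cat >> /etc/hosts << EOF
    192.168.1.97 nfs-master.host.com
    192.168.1.98 nfs-backup.host.com
    EOF
    # Parse and print the r0 definition; any error here points to a config problem
    drbdadm dump r0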
    
  • Partition the disk, but do not format it (the /dev/sdb1 backing partition must exist on both nodes, since each host's resource section references it; a non-interactive equivalent follows)

    fdisk /dev/sdb
    n -> p -> 1 -> Enter -> Enter -> w
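The same keystrokes can be fed to fdisk non-interactively, which makes preparing both nodes repeatable. A sketch, assuming /dev/sdb is an empty disk:

    # n=new, p=primary, 1=partition number; the two blank answers accept
    # the default first/last sectors; w writes the table
    printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdb
    partprobe /dev/sdb   # ask the kernel to re-read the partition table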
    
  • Create the DRBD device and activate the r0 resource

    # Create the drbd device node
    mknod /dev/drbd0 b 147 0
    # Activate the r0 resource
    drbdadm create-md r0
    # Follow the instructions in the output:
    WARN:
      You are using the 'drbd-peer-outdater' as fence-peer program.
      If you use that mechanism the dopd heartbeat plugin program needs
      to be able to call drbdsetup and drbdmeta with root privileges.

      You need to fix this with these commands:
      chgrp haclient /lib/drbd/drbdsetup-84
      chmod o-x /lib/drbd/drbdsetup-84
      chmod u+s /lib/drbd/drbdsetup-84
      chgrp haclient /usr/sbin/drbdmeta
      chmod o-x /usr/sbin/drbdmeta
      chmod u+s /usr/sbin/drbdmeta

    initializing activity log
    initializing bitmap (640 KB) to all zero
    Writing meta data...
    New drbd meta data block successfully created.

    # useradd -M -s /sbin/nologin haclient   # create the service account (this also creates the haclient group)
    # chgrp haclient /lib/drbd/drbdsetup-84
    # chmod o-x /lib/drbd/drbdsetup-84
    # chmod u+s /lib/drbd/drbdsetup-84
    # chgrp haclient /usr/sbin/drbdmeta
    # chmod o-x /usr/sbin/drbdmeta
    # chmod u+s /usr/sbin/drbdmeta
    # drbdadm create-md r0   # re-run to activate the resource
    
    • Start the DRBD service

      systemctl start drbd && systemctl enable drbd
      
    • Check the DRBD status

      cat /proc/drbd
      version: 8.4.11-1 (api:1/proto:86-101)
      GIT-hash: 66145a308421e9c124ec391a7848ac20203bb03c build by mockbuild@, 2020-04-05 02:58:18
       0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-----
          ns:1276152 nr:63137180 dw:64411784 dr:1877 al:361 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:459931480
          [=>..................] sync'ed: 12.3% (449148/511980)M
          finish: 3:04:38 speed: 41,512 (40,516) want: 41,040 K/sec
      
    • Configure the primary server: promote DRBD to primary (nfs-master)

      drbdsetup /dev/drbd0 primary --force   # run on the primary server to promote it
      # cat /proc/drbd
      version: 8.4.11-1 (api:1/proto:86-101)
      GIT-hash: 66145a308421e9c124ec391a7848ac20203bb03c build by mockbuild@, 2020-04-05 02:58:18
       0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
          ns:86032412 nr:1276188 dw:8334276 dr:78988028 al:1811 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:437037960
          [==>.................] sync'ed: 16.7% (426792/511980)M
          finish: 3:03:00 speed: 39,780 (39,904) K/sec
      # ro:Primary/Secondary means this node is primary and the peer is secondary
      
    • On the primary server, mount the disk (nfs-master); the NFS export itself is sketched after this step

      ## Master and Backup use the same path /private_data
      [root@nfs-master ~]# mkdir /private_data
      [root@nfs-master ~]# mkfs.ext4 /dev/drbd0   # format the DRBD device
      [root@nfs-master ~]# mount /dev/drbd0 /private_data   # mount it
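The failover scripts later in this article restart the nfs service, but the export itself is never shown; /private_data has to be exported on both nodes so whichever node holds the DRBD primary can serve it. A minimal sketch, where the 192.168.1.0/24 client range and the export options are assumptions to adapt to your clients:

      # On both nodes: export the shared directory (hypothetical client subnet and options)
      echo '/private_data 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports
      systemctl enable rpcbind nfs
      systemctl restart rpcbind nfs
      exportfs -arv   # re-export and list the active exports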
      
    • Test a manual failover (a sync-wait sketch follows this test)

      # On the primary server
      [root@nfs-master ~]# touch /private_data/er
      [root@nfs-master ~]# ls /private_data
      er  lost+found
      [root@nfs-master ~]# umount /private_data   # unmount
      [root@nfs-master ~]# drbdsetup /dev/drbd0 secondary   # demote the primary server's DRBD to secondary
      # On the backup server
      [root@nfs-backup ~]# mkdir /private_data   # create the shared directory
      [root@nfs-backup ~]# drbdsetup /dev/drbd0 primary   # promote the backup server's DRBD to primary
      [root@nfs-backup ~]# mount /dev/drbd0 /private_data   # mount the device
      [root@nfs-backup ~]# ls /private_data   # verify
      er  lost+found   # the file was replicated
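Switching roles while the status output still shows Inconsistent risks serving stale data, so it is safer to wait for ds:UpToDate/UpToDate before a manual switch. A small polling sketch (the 10-second interval is arbitrary):

      # Block until both sides of the resource report UpToDate
      until grep -q 'ds:UpToDate/UpToDate' /proc/drbd; do
          echo "waiting for DRBD sync to finish..."
          sleep 10
      done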
      

Configuring NFS High Availability

  • Configure keepalived
# Install keepalived
yum install keepalived -y
# Configuration (Master)
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.back && cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id DRBD_HA_MASTER
}
vrrp_script chk_nfs {
    script "/etc/keepalived/check_nfs.sh"
    interval 5
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 101
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nfs
    }
    notify_stop /etc/keepalived/notify_stop.sh
    notify_master /etc/keepalived/notify_master.sh
    virtual_ipaddress {
        192.168.1.10/24
    }
}
# Configuration (Backup)
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.back && cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id DRBD_HA_BACKUP
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 101
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # notify_master runs when this node enters the MASTER state
    # notify_backup runs when this node enters the BACKUP state
    # notify_fault runs when an anomaly puts the node into the FAULT state
    # notify_stop runs when the keepalived process terminates
    notify_master /etc/keepalived/notify_master.sh
    notify_backup /etc/keepalived/notify_backup.sh
    virtual_ipaddress {
        192.168.1.10/24
    }
}
# Start keepalived (on both Master and Backup); a quick VIP check follows
systemctl start keepalived && systemctl enable keepalived
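Once keepalived is running, the VIP should be held by the Master. A quick verification sketch (eth0 comes from the config above; showmount assumes nfs-utils is installed on the client):

# On the current Master: the VIP should appear on the interface
ip addr show eth0 | grep 192.168.1.10
# From a client: the share should be reachable through the VIP
showmount -e 192.168.1.10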
  • Keepalived health-check and notify scripts (preparation steps follow the listings)
[root@nfs-master keepalived]# cat check_nfs.sh 
#!/usr/bin/env bash
### Check NFS availability: the service process, and whether the device can stay mounted
/usr/bin/systemctl status nfs &>/dev/null
if [ $? -ne 0 ]; then
    ### If the service is unhealthy, try restarting it first
    /usr/bin/systemctl restart nfs
    /usr/bin/systemctl status nfs &>/dev/null
    if [ $? -ne 0 ]; then
        ### NFS is still unhealthy after the restart:
        ### unmount the drbd device
        umount /dev/drbd0
        ### demote this node's DRBD from primary to secondary
        drbdadm secondary r0
        ### stop keepalived so the VIP fails over
        /usr/bin/systemctl stop keepalived
    fi
fi
cat notify_master.sh
#!/usr/bin/env bash
cmd_drbdadm=$(whereis drbdadm | awk '{print $2}' | tr -d "\n")
cmd_mount=$(whereis mount | awk '{print $2}' | tr -d "\n")
cmd_service=$(whereis systemctl | awk '{print $2}' | tr -d "\n")
time=$(date "+%F  %H:%M:%S")
if [[ -n "${cmd_drbdadm}" ]]; then
    echo -e "$time    ------notify_master------\n" >> /etc/keepalived/logs/notify_master.log
    ${cmd_drbdadm} primary r0 &>> /etc/keepalived/logs/notify_master.log
    ${cmd_mount} /dev/drbd0 /private_data &>> /etc/keepalived/logs/notify_master.log
    ${cmd_service} restart nfs &>> /etc/keepalived/logs/notify_master.log
    echo -e "\n" >> /etc/keepalived/logs/notify_master.log
fi

cat notify_stop.sh
#!/usr/bin/env bash
cmd_drbdadm=$(whereis drbdadm | awk '{print $2}' | tr -d "\n")
cmd_umount=$(whereis umount | awk '{print $2}' | tr -d "\n")
cmd_service=$(whereis systemctl | awk '{print $2}' | tr -d "\n")
time=$(date "+%F  %H:%M:%S")
if [[ -n "${cmd_drbdadm}" ]]; then
    echo -e "$time  ------notify_stop------\n" >> /etc/keepalived/logs/notify_stop.log
    ${cmd_service} stop nfs &>> /etc/keepalived/logs/notify_stop.log
    ${cmd_umount} /private_data &>> /etc/keepalived/logs/notify_stop.log
    ${cmd_drbdadm} secondary r0 &>> /etc/keepalived/logs/notify_stop.log
    echo -e "\n" >> /etc/keepalived/logs/notify_stop.log
fi
### On the Backup node: notify_master.sh
#!/usr/bin/env bash
cmd_drbdadm=$(whereis drbdadm | awk '{print $2}' | tr -d "\n")
cmd_mount=$(whereis mount | awk '{print $2}' | tr -d "\n")
cmd_service=$(whereis systemctl | awk '{print $2}' | tr -d "\n")
time=$(date "+%F  %H:%M:%S")
if [[ -n "${cmd_drbdadm}" ]]; then
    echo -e "$time    ------notify_master------\n" >> /etc/keepalived/logs/notify_master.log
    ${cmd_drbdadm} primary r0 &>> /etc/keepalived/logs/notify_master.log
    ${cmd_mount} /dev/drbd0 /private_data &>> /etc/keepalived/logs/notify_master.log
    ${cmd_service} restart nfs &>> /etc/keepalived/logs/notify_master.log
    echo -e "\n" >> /etc/keepalived/logs/notify_master.log
fi

### On the Backup node: notify_backup.sh
#!/usr/bin/env bash
cmd_drbdadm=$(whereis drbdadm | awk '{print $2}' | tr -d "\n")
cmd_umount=$(whereis umount | awk '{print $2}' | tr -d "\n")
cmd_service=$(whereis systemctl | awk '{print $2}' | tr -d "\n")
time=$(date "+%F  %H:%M:%S")
if [[ -n "${cmd_drbdadm}" ]]; then
    echo -e "$time    ------notify_backup------\n" >> /etc/keepalived/logs/notify_backup.log
    ${cmd_service} stop nfs &>> /etc/keepalived/logs/notify_backup.log
    ${cmd_umount} /dev/drbd0 &>> /etc/keepalived/logs/notify_backup.log
    ${cmd_drbdadm} secondary r0 &>> /etc/keepalived/logs/notify_backup.log
    echo -e "\n" >> /etc/keepalived/logs/notify_backup.log
fi
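keepalived can only run these scripts if they are executable, and they all log to /etc/keepalived/logs/, which does not exist by default. A short preparation sketch for both nodes:

# Run on both nodes before starting keepalived
mkdir -p /etc/keepalived/logs   # log directory used by all notify scripts
chmod +x /etc/keepalived/*.sh   # keepalived must be able to execute the scripts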

