This article describes how to deploy a RabbitMQ cluster on Kubernetes, covering each step in detail from creating the namespace to configuring NFS storage.
References:
- RabbitMQ Cluster Deployment
- NFS StorageClass Creation
Deployment Steps
1. Create the Namespace
kubectl create ns rabbitmq
2. Create RBAC Permissions
Create a file named rabbitmq-rbac.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rmq-cluster
  namespace: rabbitmq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rmq-cluster
  namespace: rabbitmq
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rmq-cluster
  namespace: rabbitmq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rmq-cluster
subjects:
- kind: ServiceAccount
  name: rmq-cluster
  namespace: rabbitmq
Apply the configuration:
kubectl apply -f rabbitmq-rbac.yaml
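If you want to confirm the three objects were created, you can list them:
kubectl -n rabbitmq get serviceaccount,role,rolebinding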
3. Create the Services
Create a file named rabbitmq-service.yaml with the following content:
kind: Service
apiVersion: v1
metadata:
  labels:
    app: rmq-cluster
  name: rmq-cluster
  namespace: rabbitmq
spec:
  clusterIP: None
  ports:
  - name: amqp
    port: 5672
    targetPort: 5672
  selector:
    app: rmq-cluster
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: rmq-cluster
    type: LoadBalancer
  name: rmq-cluster-balancer
  namespace: rabbitmq
spec:
  ports:
  - name: http
    port: 15672
    protocol: TCP
    targetPort: 15672
  - name: amqp
    port: 5672
    protocol: TCP
    targetPort: 5672
  selector:
    app: rmq-cluster
  type: NodePort
Apply the configuration:
kubectl apply -f rabbitmq-service.yaml
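To verify, list the Services; the headless rmq-cluster Service should show None as its CLUSTER-IP, while rmq-cluster-balancer exposes node ports for 5672 and 15672:
kubectl -n rabbitmq get svc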
4. Create the Cluster Secret
Create a file named rabbitmq-secret.yaml with the following content (RABBITMQ_USER, RABBITMQ_PASS, and ERLANG_COOKIE are placeholders; replace them with your own values):
kind: Secret
apiVersion: v1
metadata:
  name: rmq-cluster-secret
  namespace: rabbitmq
stringData:
  cookie: ERLANG_COOKIE
  password: RABBITMQ_PASS
  url: amqp://RABBITMQ_USER:RABBITMQ_PASS@rmq-cluster-balancer
  username: RABBITMQ_USER
type: Opaque
Apply the configuration:
kubectl apply -f rabbitmq-secret.yaml
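Kubernetes stores the stringData values base64-encoded under data, so you can decode a key to confirm the values landed correctly, for example:
kubectl -n rabbitmq get secret rmq-cluster-secret -o jsonpath='{.data.username}' | base64 -d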
5. Create the ConfigMap
Create a file named rabbitmq-configmap.yaml with the following content:
kind: ConfigMap
apiVersion: v1
metadata:
  name: rmq-cluster-config
  namespace: rabbitmq
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  enabled_plugins: |
    [rabbitmq_management,rabbitmq_peer_discovery_k8s].
  rabbitmq.conf: |
    loopback_users.guest = false
    default_user = RABBITMQ_USER
    default_pass = RABBITMQ_PASS
    ## Clustering
    cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
    cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
    cluster_formation.k8s.address_type = hostname
    cluster_formation.k8s.hostname_suffix = .rmq-cluster.rabbitmq.svc.cluster.local
    cluster_formation.node_cleanup.interval = 10
    cluster_formation.node_cleanup.only_log_warning = true
    cluster_partition_handling = autoheal
    queue_master_locator = min-masters
Apply the configuration:
kubectl apply -f rabbitmq-configmap.yaml
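To confirm the two files rendered as expected (a misplaced indent under data: is easy to miss), you can inspect the ConfigMap:
kubectl -n rabbitmq describe configmap rmq-cluster-config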
6. Create the StatefulSet
Create a file named rabbitmq-cluster-sts.yaml with the following content:
kind: StatefulSet
apiVersion: apps/v1
metadata:
  labels:
    app: rmq-cluster
  name: rmq-cluster
  namespace: rabbitmq
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rmq-cluster
  serviceName: rmq-cluster
  template:
    metadata:
      labels:
        app: rmq-cluster
    spec:
      containers:
      - args:
        - -c
        - cp -v /etc/rabbitmq/rabbitmq.conf ${RABBITMQ_CONFIG_FILE}; exec docker-entrypoint.sh rabbitmq-server
        command:
        - sh
        env:
        - name: RABBITMQ_DEFAULT_USER
          valueFrom:
            secretKeyRef:
              key: username
              name: rmq-cluster-secret
        - name: RABBITMQ_DEFAULT_PASS
          valueFrom:
            secretKeyRef:
              key: password
              name: rmq-cluster-secret
        - name: RABBITMQ_ERLANG_COOKIE
          valueFrom:
            secretKeyRef:
              key: cookie
              name: rmq-cluster-secret
        - name: K8S_SERVICE_NAME
          value: rmq-cluster
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: RABBITMQ_USE_LONGNAME
          value: "true"
        - name: RABBITMQ_NODENAME
          value: rabbit@$(POD_NAME).rmq-cluster.$(POD_NAMESPACE).svc.cluster.local
        - name: RABBITMQ_CONFIG_FILE
          value: /var/lib/rabbitmq/rabbitmq.conf
        image: registry.cn-beijing.aliyuncs.com/dotbalo/rabbitmq:3.7-management
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - rabbitmqctl
            - status
          initialDelaySeconds: 30
          timeoutSeconds: 10
        name: rabbitmq
        ports:
        - containerPort: 15672
          name: http
          protocol: TCP
        - containerPort: 5672
          name: amqp
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - rabbitmqctl
            - status
          initialDelaySeconds: 10
          timeoutSeconds: 10
        volumeMounts:
        - mountPath: /etc/rabbitmq
          name: config-volume
          readOnly: false
        - mountPath: /var/lib/rabbitmq
          name: rabbitmq-storage
          readOnly: false
      serviceAccountName: rmq-cluster
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          items:
          - key: rabbitmq.conf
            path: rabbitmq.conf
          - key: enabled_plugins
            path: enabled_plugins
          name: rmq-cluster-config
        name: config-volume
  volumeClaimTemplates:
  - metadata:
      name: rabbitmq-storage
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: "nfs-storage"
      resources:
        requests:
          storage: 4Gi
Apply the configuration:
kubectl apply -f rabbitmq-cluster-sts.yaml
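The StatefulSet starts the Pods one at a time (rmq-cluster-0, then -1, then -2). Once all three are Running, you can confirm they discovered each other and formed a single cluster, for example from the first Pod:
kubectl -n rabbitmq get pods -w
kubectl -n rabbitmq exec rmq-cluster-0 -- rabbitmqctl cluster_status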
7. Configure the NFS StorageClass
For detailed NFS setup steps, see: NFS StorageClass Configuration Guide
7.1 Create the Namespace
kubectl create namespace nfs
7.2 Configure RBAC Permissions
Create a file named rbac.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Apply the RBAC configuration:
kubectl apply -f rbac.yaml
7.3 Create the NFS Provisioner
Create a file named nfs-provisioner.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: nfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: nfs-storage
        - name: NFS_SERVER
          value: <NFS_SERVER_IP>  # replace with the actual NFS server IP
        - name: NFS_PATH
          value: <NFS_PATH>  # replace with the actual NFS export path
      volumes:
      - name: nfs-client-root
        nfs:
          server: <NFS_SERVER_IP>
          path: <NFS_PATH>
Apply the NFS Provisioner configuration:
kubectl apply -f nfs-provisioner.yaml
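Before moving on, check that the provisioner Pod is Running; if it stays stuck in ContainerCreating, the NFS server address or export path is usually wrong:
kubectl -n nfs get pods -l app=nfs-client-provisioner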
7.4 Create the StorageClass
Create a file named nfs-StorageClass.yaml with the following content:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: nfs-storage
parameters:
  archiveOnDelete: "false"
Apply the StorageClass configuration:
kubectl apply -f nfs-StorageClass.yaml
7.5 Set the Default StorageClass
List the StorageClasses in the cluster:
kubectl get storageclass
Set nfs-storage as the default StorageClass:
kubectl patch storageclass nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
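Run the listing again to confirm; the entry should now read nfs-storage (default):
kubectl get storageclass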
7.6 Test the Configuration
Create a file named test-pvc.yaml with the following test PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-storage"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
Apply the test PVC:
kubectl apply -f test-pvc.yaml
Check the PVC status:
kubectl get pvc
kubectl describe pvc test-claim
If the PVC shows a Bound status, the configuration is working.
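To verify end to end that data actually reaches the NFS export, you can additionally start a throwaway Pod that mounts the claim and writes a marker file (a minimal sketch; the Pod name, image tag, and file name are arbitrary):
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test
    image: busybox:1.36
    command: ["sh", "-c", "touch /mnt/SUCCESS && sleep 3600"]
    volumeMounts:
    - name: nfs-pvc
      mountPath: /mnt
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim
After the Pod starts, a SUCCESS file should appear in the claim's directory on the NFS server.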
7.7 Troubleshooting
If the following error appears:
waiting for a volume to be created, either by external provisioner "nfs-storage" or manually created by system administrator
check the NFS Provisioner Pod logs:
kubectl logs -n nfs <nfs-client-provisioner-pod-name>
If the logs contain the error:
unexpected error getting claim reference: selfLink was empty, can't make reference
there are two ways to fix it (reference link: CSDN article).
Method 1:
- Locate the kube-apiserver.yaml file:
  find / -name kube-apiserver.yaml
- Edit the file and add the following entry under spec.containers.args:
  - --feature-gates=RemoveSelfLink=false
- Save and exit, then wait for the change to take effect.
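On a kubeadm cluster the kube-apiserver runs as a static Pod in kube-system, so kubelet restarts it automatically after the manifest changes; assuming that setup, you can confirm the new flag was picked up with:
kubectl -n kube-system get pods -l component=kube-apiserver
kubectl -n kube-system describe pod -l component=kube-apiserver | grep RemoveSelfLink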
Method 2:
For Kubernetes clusters at version 1.26 or later, install the driver with Helm instead:
# Add the Helm repo
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner

# Create the namespace (optional)
kubectl create ns nfs-sc-default

# Install the NFS driver
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set storageClass.name=nfs-sc-default \
  --set nfs.server=192.168.1.102 \
  --set nfs.path=/data/storage \
  --set storageClass.defaultClass=true \
  -n nfs-sc-default
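To confirm the release installed and the StorageClass was created (the names here follow the command above):
helm list -n nfs-sc-default
kubectl get storageclass nfs-sc-default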
With either method, the NFS StorageClass is configured and the error is resolved.
Summary
With the steps above, you now have a working RabbitMQ cluster deployed on Kubernetes.