Table of Contents
Kubernetes Platform Storage Systems in Practice
I. Block Storage (RBD)
1. Configuration
II. StatefulSet (STS) Case Study
III. File Storage (CephFS)
1. Configuration
2. Testing
IV. PVC Expansion
Dynamic Volume Expansion
Kubernetes Platform Storage Systems in Practice
I. Block Storage (RBD)
RBD: RADOS Block Devices
RADOS: Reliable, Autonomic Distributed Object Store
RBD cannot be used in RWX mode.
1. Configuration
RWO (ReadWriteOnce)
Reference: Ceph Docs
Block storage is the common choice for RWO workloads. Note that when a StatefulSet is deleted, the PVCs it created are not deleted with it; you have to clean them up manually (see the commands just below).
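A minimal cleanup sketch: the PVC name follows the <claim-template>-<sts-name>-<ordinal> pattern and is shown here only as an illustration.

# List the PVCs that a deleted StatefulSet left behind
kubectl get pvc -n default

# Delete a leftover PVC explicitly (the name is illustrative)
kubectl delete pvc www-sts-nginx-0 -n default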
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host   # failure domain: host or osd
  replicated:
    size: 2             # number of data replicas
---
apiVersion: storage.k8s.io/v1
kind: StorageClass   # the storage driver
metadata:
  name: rook-ceph-block
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  # clusterID is the namespace where the rook cluster is running
  clusterID: rook-ceph
  # Ceph pool into which the RBD image shall be created
  pool: replicapool

  # (optional) mapOptions is a comma-separated list of map options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # mapOptions: lock_on_read,queue_depth=1024

  # (optional) unmapOptions is a comma-separated list of unmap options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # unmapOptions: force

  # RBD image format. Defaults to "2".
  imageFormat: "2"

  # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
  imageFeatures: layering

  # The secrets contain Ceph admin credentials.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph

  # Specify the filesystem type of the volume. If not specified, csi-provisioner
  # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
  # in hyperconverged settings where the volume is mounted on the same node as the osds.
  csi.storage.k8s.io/fstype: ext4

# Delete the rbd volume when a PVC is deleted
reclaimPolicy: Delete
allowVolumeExpansion: true
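To apply and check the pool and StorageClass above, something like the following should work (the file name rbd-storageclass.yaml is only an assumption about how you saved the manifests):

kubectl apply -f rbd-storageclass.yaml

# The pool is a namespaced Rook CRD; the StorageClass is cluster-scoped
kubectl get cephblockpool -n rook-ceph
kubectl get storageclass rook-ceph-block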
II. StatefulSet (STS) Case Study
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-nginx
  namespace: default
spec:
  selector:
    matchLabels:
      app: sts-nginx # has to match .spec.template.metadata.labels
  serviceName: "sts-nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: sts-nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: sts-nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "rook-ceph-block"
      resources:
        requests:
          storage: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: sts-nginx
  namespace: default
spec:
  selector:
    app: sts-nginx
  type: ClusterIP
  ports:
  - name: sts-nginx
    port: 80
    targetPort: 80
    protocol: TCP
Test: create the StatefulSet, modify the nginx data, delete the StatefulSet, then recreate it. Check whether the data is lost and whether it is shared between replicas (see the commands below).
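A rough sketch of that test, assuming the two manifests above were saved as sts-nginx.yaml (the file name is an assumption):

kubectl apply -f sts-nginx.yaml

# Write data through one replica
kubectl exec sts-nginx-0 -- sh -c 'echo "hello from sts-nginx-0" > /usr/share/nginx/html/index.html'

# Delete the StatefulSet; its PVCs (and the data) stay behind
kubectl delete sts sts-nginx
kubectl get pvc

# Recreate it; sts-nginx-0 rebinds to the same PVC, so the data is still there,
# but each replica still only sees its own volume (nothing is shared)
kubectl apply -f sts-nginx.yaml
kubectl exec sts-nginx-0 -- cat /usr/share/nginx/html/index.html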
III. File Storage (CephFS)
1. Configuration
File storage is the common choice for RWX workloads, e.g. 10 Pods all reading and writing the same location.
Reference: Ceph Docs
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph # namespace:cluster
spec:
  # The metadata pool spec. Must use replication.
  metadataPool:
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      # Inline compression mode for the data pool
      # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
      compression_mode: none
      # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
      # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
      #target_size_ratio: ".5"
  # The list of data pool specs. Can use replication or erasure coding.
  dataPools:
  - failureDomain: host
    replicated:
      size: 3
      # Disallow setting pool with replica 1, this could lead to data loss without recovery.
      # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
      requireSafeReplicaSize: true
    parameters:
      # Inline compression mode for the data pool
      # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
      compression_mode: none
      # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
      # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
      #target_size_ratio: ".5"
  # Whether to preserve filesystem after CephFilesystem CRD deletion
  preserveFilesystemOnDelete: true
  # The metadata service (mds) configuration
  metadataServer:
    # The number of active MDS instances
    activeCount: 1
    # Whether each active MDS instance will have an active standby with a warm metadata cache for faster failover.
    # If false, standbys will be available, but will not have a warm cache.
    activeStandby: true
    # The affinity rules to apply to the mds deployment
    placement:
      # nodeAffinity:
      #   requiredDuringSchedulingIgnoredDuringExecution:
      #     nodeSelectorTerms:
      #     - matchExpressions:
      #       - key: role
      #         operator: In
      #         values:
      #         - mds-node
      # topologySpreadConstraints:
      # tolerations:
      # - key: mds-node
      #   operator: Exists
      # podAffinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - rook-ceph-mds
          # topologyKey: kubernetes.io/hostname will place MDS across different hosts
          topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - rook-ceph-mds
            # topologyKey: */zone can be used to spread MDS across different AZ
            # Use <topologyKey: failure-domain.beta.kubernetes.io/zone> in k8s cluster if your cluster is v1.16 or lower
            # Use <topologyKey: topology.kubernetes.io/zone> in k8s cluster is v1.17 or upper
            topologyKey: topology.kubernetes.io/zone
    # A key/value list of annotations
    annotations:
    #  key: value
    # A key/value list of labels
    labels:
    #  key: value
    resources:
    # The requests and limits set here, allow the filesystem MDS Pod(s) to use half of one CPU core and 1 gigabyte of memory
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"
    # priorityClassName: my-priority-class
  mirroring:
    enabled: false
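Before creating the StorageClass, it is worth confirming that the filesystem and its MDS pods came up (a quick check, assuming the default Rook labels):

kubectl get cephfilesystem -n rook-ceph
kubectl get pods -n rook-ceph -l app=rook-ceph-mds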
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  # clusterID is the namespace where operator is deployed.
  clusterID: rook-ceph

  # CephFS filesystem name into which the volume shall be created
  fsName: myfs

  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: myfs-data0

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph

reclaimPolicy: Delete
allowVolumeExpansion: true
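Applying and verifying this StorageClass looks much the same as for the block pool (the file name cephfs-storageclass.yaml is an assumption):

kubectl apply -f cephfs-storageclass.yaml
kubectl get storageclass rook-cephfs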
2. Testing
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: default
  labels:
    app: nginx-deploy
spec:
  selector:
    matchLabels:
      app: nginx-deploy
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
      - name: nginx-deploy
        image: nginx
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: nginx-html-storage
          mountPath: /usr/share/nginx/html
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: nginx-html-storage
        persistentVolumeClaim:
          claimName: nginx-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pv-claim
  labels:
    app: nginx-deploy
spec:
  storageClassName: rook-cephfs
  accessModes:
    - ReadWriteMany   ## what would the effect be if this were ReadWriteOnce?
  resources:
    requests:
      storage: 10Mi
Test: create the Deployment, modify the page, delete the Deployment, then create a new Deployment. Check whether it binds to the PVC successfully and whether the data is still there (see the commands below).
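A sketch of that test, assuming the Deployment and PVC above live in nginx-deploy.yaml (the file name is an assumption). Because the PVC is ReadWriteMany, all three replicas serve the same files:

kubectl apply -f nginx-deploy.yaml

# Write a page through any one replica
kubectl exec deploy/nginx-deploy -- sh -c 'echo "shared cephfs page" > /usr/share/nginx/html/index.html'

# Every replica reads the same content back
kubectl exec deploy/nginx-deploy -- cat /usr/share/nginx/html/index.html

# Delete and recreate the Deployment; the PVC is untouched, so the page survives
kubectl delete deploy nginx-deploy
kubectl apply -f nginx-deploy.yaml
kubectl exec deploy/nginx-deploy -- cat /usr/share/nginx/html/index.html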
IV. PVC Expansion
Refer to the CSI (Container Storage Interface) documentation:
Volume expansion: Ceph Docs
Dynamic Volume Expansion
# This was already enabled when the StorageClass was created (allowVolumeExpansion: true).
# Test: in the container's mount directory, curl -O a large file; by default the download fails because the small volume fills up.
# Modify the original PVC to expand the capacity.
# Note: volumes can only be expanded, never shrunk.
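A hedged example of that expansion, growing the CephFS PVC from the test above (the target size of 20Mi is arbitrary):

# Raise spec.resources.requests.storage; the CSI driver resizes the volume online
kubectl patch pvc nginx-pv-claim -p '{"spec":{"resources":{"requests":{"storage":"20Mi"}}}}'

# Watch the CAPACITY column catch up with the new request
kubectl get pvc nginx-pv-claim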
A stateful application (3 replicas) uses block storage: each replica operates on the PV bound to its own PVC, and the data is not lost.
A stateless application (3 replicas) uses shared storage: all replicas operate on the single PV bound to one shared PVC, and the data is not lost either.
- Other Pods can also modify the data.
- Stateful MySQL acts as the master node: MySQL - Master ---- PV
- Stateless, read-only MySQL replicas mount the master's PVC (a minimal sketch of this mount pattern follows below).
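A minimal sketch of that last pattern, with a plain container standing in for the read-only consumers. Every name here is hypothetical, and the shared claim would need an access mode that allows sharing (e.g. an RWX claim backed by rook-cephfs):

# Sketch only: a Deployment whose replicas mount an existing PVC read-only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: readonly-readers          # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: readonly-readers
  template:
    metadata:
      labels:
        app: readonly-readers
    spec:
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "ls /data && sleep 3600"]   # only reads the shared data
        volumeMounts:
        - name: shared-data
          mountPath: /data
          readOnly: true
      volumes:
      - name: shared-data
        persistentVolumeClaim:
          claimName: master-data-pvc   # hypothetical: the claim written by the "master" workload
          readOnly: true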