Cloud Native (Part 34) | Kubernetes: Platform Storage System in Practice


Table of Contents

Kubernetes Platform Storage System in Practice

I. Block Storage (RBD)

1. Configuration

II. StatefulSet (STS) Case Study

III. File Storage (CephFS)

1. Configuration

2. Testing

IV. PVC Expansion

Dynamic Volume Expansion

Kubernetes Platform Storage System in Practice

I. Block Storage (RBD)

RBD: RADOS Block Device

RADOS: Reliable, Autonomic Distributed Object Store

RBD volumes cannot be mounted in RWX (ReadWriteMany) mode.

1. Configuration

RWO: ReadWriteOnce

Reference: Ceph Docs

Block storage is the common choice and is used in RWO mode. Note that when a StatefulSet is deleted, its PVCs are not deleted with it; you have to clean them up by hand.

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host  # failure domain: host or osd
  replicated:
    size: 2  # number of data replicas
---
apiVersion: storage.k8s.io/v1
kind: StorageClass  # the storage driver definition
metadata:
  name: rook-ceph-block
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  # clusterID is the namespace where the rook cluster is running
  clusterID: rook-ceph
  # Ceph pool into which the RBD image shall be created
  pool: replicapool
  # (optional) mapOptions is a comma-separated list of map options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # mapOptions: lock_on_read,queue_depth=1024
  # (optional) unmapOptions is a comma-separated list of unmap options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # unmapOptions: force
  # RBD image format. Defaults to "2".
  imageFormat: "2"
  # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
  imageFeatures: layering
  # The secrets contain Ceph admin credentials.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, csi-provisioner
  # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
  # in hyperconverged settings where the volume is mounted on the same node as the osds.
  csi.storage.k8s.io/fstype: ext4
# Delete the rbd volume when a PVC is deleted
reclaimPolicy: Delete
allowVolumeExpansion: true
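
To try this out, the two manifests above can be applied and checked with kubectl. A minimal sketch, assuming they are saved as storageclass.yaml (the filename is illustrative):

# Create the CephBlockPool and the StorageClass
kubectl apply -f storageclass.yaml

# Verify that both objects exist
kubectl -n rook-ceph get cephblockpool replicapool
kubectl get storageclass rook-ceph-block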

II. StatefulSet (STS) Case Study

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-nginx
  namespace: default
spec:
  selector:
    matchLabels:
      app: sts-nginx # has to match .spec.template.metadata.labels
  serviceName: "sts-nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: sts-nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: sts-nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "rook-ceph-block"
      resources:
        requests:
          storage: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: sts-nginx
  namespace: default
spec:
  selector:
    app: sts-nginx
  type: ClusterIP
  ports:
  - name: sts-nginx
    port: 80
    targetPort: 80
    protocol: TCP

Test: create the StatefulSet, modify the nginx data, delete the StatefulSet, then recreate it. Does the data survive, and is it shared between replicas?
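
A minimal sketch of that test, assuming the StatefulSet manifest above is saved as sts.yaml (the filename is illustrative). With RWO block storage, each replica should keep its own data, and the replicas should not share it:

# Create the StatefulSet and write a marker file through one replica
kubectl apply -f sts.yaml
kubectl rollout status sts/sts-nginx
kubectl exec sts-nginx-0 -- sh -c 'echo hello > /usr/share/nginx/html/index.html'

# Delete the StatefulSet; its PVCs are intentionally left behind
kubectl delete sts sts-nginx
kubectl get pvc   # www-sts-nginx-0, www-sts-nginx-1, www-sts-nginx-2 remain

# Recreate it; each replica re-binds to its old PVC
kubectl apply -f sts.yaml
kubectl rollout status sts/sts-nginx
kubectl exec sts-nginx-0 -- cat /usr/share/nginx/html/index.html   # the marker survives
kubectl exec sts-nginx-1 -- ls /usr/share/nginx/html               # empty: replicas do not share data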

III. File Storage (CephFS)

1. Configuration

File storage is the common choice for RWX mode, e.g. ten Pods all reading and writing the same directory.

Reference: Ceph Docs

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph # namespace:cluster
spec:
  # The metadata pool spec. Must use replication.
  metadataPool:
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      # Inline compression mode for the data pool
      # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
      compression_mode: none
      # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
      # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
      #target_size_ratio: ".5"
  # The list of data pool specs. Can use replication or erasure coding.
  dataPools:
  - failureDomain: host
    replicated:
      size: 3
      # Disallow setting pool with replica 1, this could lead to data loss without recovery.
      # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
      requireSafeReplicaSize: true
    parameters:
      # Inline compression mode for the data pool
      # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
      compression_mode: none
      # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
      # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
      #target_size_ratio: ".5"
  # Whether to preserve filesystem after CephFilesystem CRD deletion
  preserveFilesystemOnDelete: true
  # The metadata service (mds) configuration
  metadataServer:
    # The number of active MDS instances
    activeCount: 1
    # Whether each active MDS instance will have an active standby with a warm metadata cache for faster failover.
    # If false, standbys will be available, but will not have a warm cache.
    activeStandby: true
    # The affinity rules to apply to the mds deployment
    placement:
      #  nodeAffinity:
      #    requiredDuringSchedulingIgnoredDuringExecution:
      #      nodeSelectorTerms:
      #      - matchExpressions:
      #        - key: role
      #          operator: In
      #          values:
      #          - mds-node
      #  topologySpreadConstraints:
      #  tolerations:
      #  - key: mds-node
      #    operator: Exists
      #  podAffinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - rook-ceph-mds
          # topologyKey: kubernetes.io/hostname will place MDS across different hosts
          topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - rook-ceph-mds
            # topologyKey: */zone can be used to spread MDS across different AZ
            # Use <topologyKey: failure-domain.beta.kubernetes.io/zone> if your k8s cluster is v1.16 or lower
            # Use <topologyKey: topology.kubernetes.io/zone> if your k8s cluster is v1.17 or higher
            topologyKey: topology.kubernetes.io/zone
    # A key/value list of annotations
    annotations:
    #  key: value
    # A key/value list of labels
    labels:
    #  key: value
    resources:
    # The requests and limits set here, allow the filesystem MDS Pod(s) to use half of one CPU core and 1 gigabyte of memory
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"
    # priorityClassName: my-priority-class
  mirroring:
    enabled: false

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  # clusterID is the namespace where operator is deployed.
  clusterID: rook-ceph
  # CephFS filesystem name into which the volume shall be created
  fsName: myfs
  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: myfs-data0
  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true
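
Assuming the CephFilesystem and StorageClass above are saved as filesystem.yaml and storageclass-cephfs.yaml (illustrative filenames), a minimal sketch for applying and verifying them:

kubectl apply -f filesystem.yaml
kubectl apply -f storageclass-cephfs.yaml

# The operator should bring up MDS pods for the new filesystem
kubectl -n rook-ceph get pod -l app=rook-ceph-mds
kubectl get storageclass rook-cephfs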


2. Testing

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: default
  labels:
    app: nginx-deploy
spec:
  selector:
    matchLabels:
      app: nginx-deploy
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
      - name: nginx-deploy
        image: nginx
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: nginx-html-storage
          mountPath: /usr/share/nginx/html
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: nginx-html-storage
        persistentVolumeClaim:
          claimName: nginx-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pv-claim
  labels:
    app: nginx-deploy
spec:
  storageClassName: rook-cephfs
  accessModes:
  - ReadWriteMany  # What would happen with ReadWriteOnce instead?
  resources:
    requests:
      storage: 10Mi

Test: create the Deployment, modify the page, then delete the Deployment. Create a new Deployment: does it bind to the PVC successfully, and is the data still there?
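
A minimal sketch of that test, assuming the Deployment and PVC above are saved as nginx-deploy.yaml (illustrative filename). With RWX file storage, every replica should see the same content, and the data should outlive the Deployment because the PVC is a separate object:

kubectl apply -f nginx-deploy.yaml
kubectl rollout status deploy/nginx-deploy

# Write the page through one replica and read it through another
POD1=$(kubectl get pod -l app=nginx-deploy -o jsonpath='{.items[0].metadata.name}')
POD2=$(kubectl get pod -l app=nginx-deploy -o jsonpath='{.items[1].metadata.name}')
kubectl exec "$POD1" -- sh -c 'echo shared-page > /usr/share/nginx/html/index.html'
kubectl exec "$POD2" -- cat /usr/share/nginx/html/index.html   # prints shared-page

# Delete and recreate the Deployment; the PVC and its data survive
kubectl delete deploy nginx-deploy
kubectl apply -f nginx-deploy.yaml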

IV. PVC Expansion

Refer to the CSI (Container Storage Interface) documentation:

Volume expansion: Ceph Docs

Dynamic Volume Expansion

# Expansion was already enabled when the StorageClass was created (allowVolumeExpansion: true)
# Test: in the container's mount directory, run `curl -O <some large file>`; by default the download fails once the small volume fills up

# Edit the original PVC to enlarge the volume.

# Note: a volume can only be expanded, never shrunk.
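
A minimal sketch of the expansion itself, using the nginx-pv-claim PVC from the CephFS test (the target size is illustrative):

# Grow the PVC from 10Mi to 20Mi by patching its storage request
kubectl patch pvc nginx-pv-claim -p '{"spec":{"resources":{"requests":{"storage":"20Mi"}}}}'

# Watch the new size propagate to the bound PV and into the mounted filesystem
kubectl get pvc nginx-pv-claim
kubectl get pv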

A stateful application (3 replicas) uses block storage: each replica operates only on the PV bound to its own PVC, and the data is not lost.

A stateless application (3 replicas) uses shared storage: all replicas operate on the single PV bound to one shared PVC, and the data is likewise not lost.

  • Other Pods can modify the shared data.

  • Run MySQL as a stateful master: MySQL - Master ---- PV

  • Run stateless, read-only MySQL instances that mount the master's PVC (see the sketch below).
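
A minimal sketch of that mount pattern only (all names here are hypothetical; sharing one PVC across Pods requires an RWX-capable StorageClass such as rook-cephfs, and a production MySQL read replica would normally use real replication rather than a shared data directory):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-readonly        # hypothetical stateless reader
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mysql-readonly
  template:
    metadata:
      labels:
        app: mysql-readonly
    spec:
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "ls /var/lib/mysql && sleep 3600"]
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          readOnly: true               # readers must not write the master's data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: mysql-master-pvc  # hypothetical PVC bound to the master's PV
EOF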

