Kubernetes实战——集群监控和可视化管理


Table of Contents

I. Kube-Prometheus
    1. Version compatibility
    2. Installing kube-prometheus
    3. Installing an Ingress for access
II. Installing the ELK logging stack on K8s
    1. Installing Elasticsearch
    2. Installing Logstash
    3. Installing Filebeat
    4. Installing Kibana
III. Installing and using Dashboard
    1. Installation
    2. Creating a token
    3. Usage
IV. Installing and using KubeSphere
    1. Official documentation
    2. Preparation
        2.1 Installing a default StorageClass
    3. Installation
    4. Uninstallation
    5. Usage
        5.1 Main UI
        5.2 Pluggable components
        5.3 Platform management
        5.4 Access control
        5.5 Notification settings
V. Appendix
    1. kube-prometheus source
    2. kube-prometheus images
    3. Dashboard manifest
    4. openebs-operator.yaml
    5. default-storage-class.yaml
    6. Renaming images
VI. References


I. Kube-Prometheus

1. Version compatibility

Pick the kube-prometheus release that matches your Kubernetes version (the authoritative, up-to-date matrix is in the upstream kube-prometheus README):

| kube-prometheus stack | K8s 1.23 | 1.24 | 1.25 | 1.26 | 1.27 | 1.28 | 1.29 | 1.30 | 1.31 |
|---|---|---|---|---|---|---|---|---|---|
| release-0.11 | ✔ | ✔ | – | – | – | – | – | – | – |
| release-0.12 | – | ✔ | ✔ | – | – | – | – | – | – |
| release-0.13 | – | – | – | ✔ | ✔ | ✔ | – | – | – |
| release-0.14 | – | – | – | – | – | – | ✔ | ✔ | – |
| main | – | – | – | – | – | – | – | ✔ | ✔ |

prometheus">2、安装 kube-prometheus

# Download the release (see the appendix if the download fails)
wget https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.11.0.tar.gz
# Unpack the archive
tar -xf v0.11.0.tar.gz
# Rename the directory
mv kube-prometheus-0.11.0 kube-prometheus
# Enter the manifests directory
cd kube-prometheus/manifests
# Create the setup resources (CRDs and the monitoring namespace)
kubectl create -f setup/
# Create the kube-prometheus resources
kubectl apply -f ../manifests/
# Check the resources
kubectl get all -n monitoring

Note:

  • Some images may not be pullable from inside China; a packaged copy is provided in the appendix and can be imported directly.
  • The installation needs enough resources, so clean up unused Pods or increase the VM's memory first.
# Import the images
docker load -i <archive name>
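If you pull the images from a mirror registry instead of loading the archive, they have to be retagged to the names the manifests reference. A minimal sketch; the mirror address and the kube-state-metrics tag below are assumptions, so read the exact image names from the manifests in kube-prometheus/manifests:

# Pull from an assumed mirror, retag to the name the manifest expects, then remove the mirror tag
docker pull registry.aliyuncs.com/google_containers/kube-state-metrics:v2.5.0
docker tag registry.aliyuncs.com/google_containers/kube-state-metrics:v2.5.0 k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.5.0
docker rmi registry.aliyuncs.com/google_containers/kube-state-metrics:v2.5.0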

3. Installing an Ingress for access

apiVersion: networking.k8s.io/v1
kind: Ingress                       # resource type: Ingress
metadata:
  name: prometheus-ingress          # Ingress name
  namespace: monitoring             # namespace
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.wssnail.com       # domain used to reach Grafana
    http:
      paths:
      - pathType: Prefix            # path type
        path: /
        backend:
          service:
            name: grafana           # must match the service name in grafana-service.yaml
            port:
              number: 3000          # must match the port in grafana-service.yaml
  - host: prometheus.wssnail.com    # domain used to reach Prometheus
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: prometheus-k8s    # must match the service name in prometheus-service.yaml
            port:
              number: 9090          # must match the port in prometheus-service.yaml
  - host: alertmanager.wssnail.com  # domain used to reach Alertmanager
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: alertmanager-main # must match the service name in alertmanager-service.yaml
            port:
              number: 9093          # must match the port in alertmanager-service.yaml
# Create the resource
kubectl create -f prometheus-ingress.yaml
# Add hosts entries; the IP is the IP of the node where the ingress-nginx Pod runs
192.168.139.207  grafana.wssnail.com
192.168.139.207  prometheus.wssnail.com
192.168.139.207  alertmanager.wssnail.com
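Before editing hosts files on other machines, the routing can be verified with curl by overriding the Host header (this assumes ingress-nginx listens on port 80 on that node; append its NodePort otherwise):

# Grafana normally answers with a 302 redirect to /login
curl -I -H "Host: grafana.wssnail.com" http://192.168.139.207/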

II. Installing the ELK logging stack on K8s

1. Installing Elasticsearch

apiVersion: v1
kind: Namespace
metadata:name: kube-logging
--- 
apiVersion: v1 
kind: Service 
metadata: name: elasticsearch-logging namespace: kube-logging labels: k8s-app: elasticsearch-logging kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: "Elasticsearch" 
spec: ports: - port: 9200 protocol: TCP targetPort: db selector: k8s-app: elasticsearch-logging 
--- 
# RBAC authn and authz 
apiVersion: v1 
kind: ServiceAccount 
metadata: name: elasticsearch-logging namespace: kube-logging labels: k8s-app: elasticsearch-logging kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile 
--- 
kind: ClusterRole 
apiVersion: rbac.authorization.k8s.io/v1 
metadata: name: elasticsearch-logging labels: k8s-app: elasticsearch-logging kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile 
rules: 
- apiGroups: - "" resources: - "services" - "namespaces" - "endpoints" verbs: - "get" 
--- 
kind: ClusterRoleBinding 
apiVersion: rbac.authorization.k8s.io/v1 
metadata: namespace: kube-logging name: elasticsearch-logging labels: k8s-app: elasticsearch-logging kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile 
subjects: 
- kind: ServiceAccount name: elasticsearch-logging namespace: kube-logging apiGroup: "" 
roleRef: kind: ClusterRole name: elasticsearch-logging apiGroup: "" 
--- 
# Elasticsearch deployment itself 
apiVersion: apps/v1 
kind: StatefulSet #使用statefulset创建Pod 
metadata: name: elasticsearch-logging #pod名称,使用statefulSet创建的Pod是有序号有顺序的 namespace: kube-logging  #命名空间 labels: k8s-app: elasticsearch-logging kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile srv: srv-elasticsearch 
spec: serviceName: elasticsearch-logging #与svc相关联,这可以确保使用以下DNS地址访问Statefulset中的每个pod (es-cluster-[0,1,2].elasticsearch.elk.svc.cluster.local) replicas: 1 #副本数量,单节点 selector: matchLabels: k8s-app: elasticsearch-logging #和pod template配置的labels相匹配 template: metadata: labels: k8s-app: elasticsearch-logging kubernetes.io/cluster-service: "true" spec: serviceAccountName: elasticsearch-logging containers: - image: docker.io/library/elasticsearch:7.9.3 name: elasticsearch-logging resources: # need more cpu upon initialization, therefore burstable class limits: cpu: 1000m memory: 2Gi requests: cpu: 100m memory: 500Mi ports: - containerPort: 9200 name: db protocol: TCP - containerPort: 9300 name: transport protocol: TCP volumeMounts: - name: elasticsearch-logging mountPath: /usr/share/elasticsearch/data/   #挂载点 env: - name: "NAMESPACE" valueFrom: fieldRef: fieldPath: metadata.namespace - name: "discovery.type"  #定义单节点类型 value: "single-node" - name: ES_JAVA_OPTS #设置Java的内存参数,可以适当进行加大调整 value: "-Xms512m -Xmx2g"  volumes: - name: elasticsearch-logging hostPath: path: /data/es/nodeSelector: #如果需要匹配落盘节点可以添加 nodeSelect es: data tolerations: - effect: NoSchedule operator: Exists # Elasticsearch requires vm.max_map_count to be at least 262144. # If your OS already sets up this number to a higher value, feel free # to remove this init container. initContainers: #容器初始化前的操作 - name: elasticsearch-logging-init image: alpine:3.6 command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"] #添加mmap计数限制,太低可能造成内存不足的错误 securityContext:  #仅应用到指定的容器上,并且不会影响Volume privileged: true #运行特权容器 - name: increase-fd-ulimit image: busybox imagePullPolicy: IfNotPresent command: ["sh", "-c", "ulimit -n 65536"] #修改文件描述符最大数量 securityContext: privileged: true - name: elasticsearch-volume-init #es数据落盘初始化,加上777权限 image: alpine:3.6 command: - chmod - -R - "777" - /usr/share/elasticsearch/data/ volumeMounts: - name: elasticsearch-logging mountPath: /usr/share/elasticsearch/data/
# Label node1 so the Elasticsearch Pod is scheduled there (matches the nodeSelector es: data)
kubectl label no node1 es=data
# Create the resources
kubectl create -f es.yaml
# Check the Pods
kubectl get po -n kube-logging
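Once the Pod reports Running, a quick sanity check, assuming the elasticsearch-logging Service from the manifest above, is to port-forward it and query the cluster health endpoint:

# Forward the Elasticsearch service locally and query cluster health
kubectl -n kube-logging port-forward svc/elasticsearch-logging 9200:9200 &
sleep 2
curl -s "http://localhost:9200/_cluster/health?pretty"
# a single-node cluster normally reports "status" : "green" or "yellow"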

2. Installing Logstash

--- 
apiVersion: v1 
kind: Service 
metadata: name: logstash namespace: kube-logging 
spec: ports: - port: 5044 targetPort: beats selector: type: logstash clusterIP: None 
--- 
apiVersion: apps/v1 
kind: Deployment 
metadata: name: logstash namespace: kube-logging 
spec: selector: matchLabels: type: logstash template: metadata: labels: type: logstash srv: srv-logstash spec: containers: - image: docker.io/kubeimages/logstash:7.9.3 #该镜像支持arm64和amd64两种架构 name: logstash ports: - containerPort: 5044 name: beats command: - logstash - '-f' - '/etc/logstash_c/logstash.conf' env: - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS" value: "http://elasticsearch-logging:9200"  #这里配置的是es的服务名称,如果是夸命名空间需要加上命名空间volumeMounts: - name: config-volume mountPath: /etc/logstash_c/ - name: config-yml-volume mountPath: /usr/share/logstash/config/ - name: timezone mountPath: /etc/localtime resources: #logstash一定要加上资源限制,避免对其他业务造成资源抢占影响 limits: cpu: 1000m memory: 2048Mi requests: cpu: 512m memory: 512Mi volumes: - name: config-volume configMap: name: logstash-conf items: - key: logstash.conf path: logstash.conf - name: timezone hostPath: path: /etc/localtime - name: config-yml-volume configMap: name: logstash-yml items: - key: logstash.yml path: logstash.yml --- 
apiVersion: v1 
kind: ConfigMap 
metadata: name: logstash-conf namespace: kube-logging labels: type: logstash 
data: logstash.conf: |- input {beats { port => 5044 } } filter {# 处理 ingress 日志 if [kubernetes][container][name] == "nginx-ingress-controller" {json {source => "message" target => "ingress_log" }if [ingress_log][requesttime] { mutate { convert => ["[ingress_log][requesttime]", "float"] }}if [ingress_log][upstremtime] { mutate { convert => ["[ingress_log][upstremtime]", "float"] }} if [ingress_log][status] { mutate { convert => ["[ingress_log][status]", "float"] }}if  [ingress_log][httphost] and [ingress_log][uri] {mutate { add_field => {"[ingress_log][entry]" => "%{[ingress_log][httphost]}%{[ingress_log][uri]}"} } mutate { split => ["[ingress_log][entry]","/"] } if [ingress_log][entry][1] { mutate { add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/%{[ingress_log][entry][1]}"} remove_field => "[ingress_log][entry]" }} else { mutate { add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/"} remove_field => "[ingress_log][entry]" }}}}# 处理以srv进行开头的业务服务日志 if [kubernetes][container][name] =~ /^srv*/ { json { source => "message" target => "tmp" } if [kubernetes][namespace] == "kube-logging" { drop{} } if [tmp][level] { mutate{ add_field => {"[applog][level]" => "%{[tmp][level]}"} } if [applog][level] == "debug"{ drop{} } } if [tmp][msg] { mutate { add_field => {"[applog][msg]" => "%{[tmp][msg]}"} } } if [tmp][func] { mutate { add_field => {"[applog][func]" => "%{[tmp][func]}"} } } if [tmp][cost]{ if "ms" in [tmp][cost] { mutate { split => ["[tmp][cost]","m"] add_field => {"[applog][cost]" => "%{[tmp][cost][0]}"} convert => ["[applog][cost]", "float"] } } else { mutate { add_field => {"[applog][cost]" => "%{[tmp][cost]}"} }}}if [tmp][method] { mutate { add_field => {"[applog][method]" => "%{[tmp][method]}"} }}if [tmp][request_url] { mutate { add_field => {"[applog][request_url]" => "%{[tmp][request_url]}"} } }if [tmp][meta._id] { mutate { add_field => {"[applog][traceId]" => "%{[tmp][meta._id]}"} } } if [tmp][project] { mutate { add_field => {"[applog][project]" => "%{[tmp][project]}"} }}if [tmp][time] { mutate { add_field => {"[applog][time]" => "%{[tmp][time]}"} }}if [tmp][status] { mutate { add_field => {"[applog][status]" => "%{[tmp][status]}"} convert => ["[applog][status]", "float"] }}}mutate { rename => ["kubernetes", "k8s"] remove_field => "beat" remove_field => "tmp" remove_field => "[k8s][labels][app]" }}output { elasticsearch { hosts => ["http://elasticsearch-logging:9200"] codec => json index => "logstash-%{+YYYY.MM.dd}" #索引名称以logstash+日志进行每日新建 }} 
---apiVersion: v1 
kind: ConfigMap 
metadata: name: logstash-yml namespace: kube-logging labels: type: logstash 
data: logstash.yml: |- http.host: "0.0.0.0" xpack.monitoring.elasticsearch.hosts: http://elasticsearch-logging:9200
# Create the resources
kubectl create -f logstash.yaml
# Check the Pods
kubectl get po -n kube-logging
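If events never reach Elasticsearch later on, the Logstash log is usually the first place to look; a minimal check:

# Tail the Logstash log; the beats input should report that it is listening on port 5044
kubectl -n kube-logging logs deploy/logstash --tail=50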

3. Installing Filebeat

--- 
apiVersion: v1 
kind: ConfigMap 
metadata: name: filebeat-config namespace: kube-logging labels: k8s-app: filebeat 
data: filebeat.yml: |- filebeat.inputs: - type: container enable: truepaths: - /var/log/containers/*.log #这里是filebeat采集挂载到pod中的日志目录 processors: - add_kubernetes_metadata: #添加k8s的字段用于后续的数据清洗 host: ${NODE_NAME}matchers: - logs_path: logs_path: "/var/log/containers/" #output.kafka:  #如果日志量较大,es中的日志有延迟,可以选择在filebeat和logstash中间加入kafka #  hosts: ["kafka-log-01:9092", "kafka-log-02:9092", "kafka-log-03:9092"] # topic: 'topic-test-log' #  version: 2.0.0 output.logstash: #因为还需要部署logstash进行数据的清洗,因此filebeat是把数据推到logstash中 hosts: ["logstash:5044"] enabled: true 
--- 
apiVersion: v1 
kind: ServiceAccount 
metadata: name: filebeat namespace: kube-logging labels: k8s-app: filebeat
--- 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole 
metadata: name: filebeat labels: k8s-app: filebeat 
rules: 
- apiGroups: [""] # "" indicates the core API group resources: - namespaces - pods verbs: ["get", "watch", "list"] 
--- 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: name: filebeat 
subjects: 
- kind: ServiceAccount name: filebeat namespace: kube-logging 
roleRef: kind: ClusterRole name: filebeat apiGroup: rbac.authorization.k8s.io 
--- 
apiVersion: apps/v1 
kind: DaemonSet 
metadata: name: filebeat namespace: kube-logging labels: k8s-app: filebeat 
spec: selector: matchLabels: k8s-app: filebeat template: metadata: labels: k8s-app: filebeat spec: serviceAccountName: filebeat terminationGracePeriodSeconds: 30 containers: - name: filebeat image: docker.io/kubeimages/filebeat:7.9.3 #该镜像支持arm64和amd64两种架构 args: [ "-c", "/etc/filebeat.yml", "-e","-httpprof","0.0.0.0:6060" ] #ports: #  - containerPort: 6060 #    hostPort: 6068 env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: ELASTICSEARCH_HOST value: elasticsearch-logging - name: ELASTICSEARCH_PORT value: "9200" securityContext: runAsUser: 0 # If using Red Hat OpenShift uncomment this: #privileged: true resources: limits: memory: 1000Mi cpu: 1000m requests: memory: 100Mi cpu: 100m volumeMounts: - name: config #挂载的是filebeat的配置文件 mountPath: /etc/filebeat.yml readOnly: true subPath: filebeat.yml - name: data #持久化filebeat数据到宿主机上 mountPath: /usr/share/filebeat/data - name: varlibdockercontainers #这里主要是把宿主机上的源日志目录挂载到filebeat容器中,如果没有修改docker或者containerd的runtime进行了标准的日志落盘路径,可以把mountPath改为/var/lib mountPath: /var/libreadOnly: true - name: varlog #这里主要是把宿主机上/var/log/pods和/var/log/containers的软链接挂载到filebeat容器中 mountPath: /var/log/ readOnly: true - name: timezone mountPath: /etc/localtime volumes: - name: config configMap: defaultMode: 0600 name: filebeat-config - name: varlibdockercontainers hostPath: #如果没有修改docker或者containerd的runtime进行了标准的日志落盘路径,可以把path改为/var/lib path: /var/lib- name: varlog hostPath: path: /var/log/ # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart - name: inputs configMap: defaultMode: 0600 name: filebeat-inputs - name: data hostPath: path: /data/filebeat-data type: DirectoryOrCreate - name: timezone hostPath: path: /etc/localtime tolerations: #加入容忍能够调度到每一个节点 - effect: NoExecute key: dedicated operator: Equal value: gpu - effect: NoSchedule operator: Exists 
# Create the resources
kubectl create -f filebeat.yaml
# Check the Pods
kubectl get po -n kube-logging
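Filebeat is deployed as a DaemonSet, so one Pod per schedulable node is expected; a quick way to confirm the rollout:

# DESIRED and READY should match, with one filebeat Pod on every node
kubectl -n kube-logging get ds filebeat
kubectl -n kube-logging get po -l k8s-app=filebeat -o wide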

4. Installing Kibana

---
apiVersion: v1
kind: ConfigMap
metadata:namespace: kube-loggingname: kibana-configlabels:k8s-app: kibana
data:kibana.yml: |-server.name: kibanaserver.host: "0"i18n.locale: zh-CN                      #设置默认语言为中文elasticsearch:hosts: ${ELASTICSEARCH_HOSTS}         #es集群连接地址,由于我这都都是k8s部署且在一个ns下,可以直接使用service name连接
--- 
apiVersion: v1 
kind: Service 
metadata: name: kibana namespace: kube-logging labels: k8s-app: kibana kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: "Kibana" srv: srv-kibana 
spec: type: NodePortports: - port: 5601 protocol: TCP targetPort: ui selector: k8s-app: kibana 
--- 
apiVersion: apps/v1 
kind: Deployment 
metadata: name: kibana namespace: kube-logging labels: k8s-app: kibana kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile srv: srv-kibana 
spec: replicas: 1 selector: matchLabels: k8s-app: kibana template: metadata: labels: k8s-app: kibana spec: containers: - name: kibana image: docker.io/kubeimages/kibana:7.9.3 #该镜像支持arm64和amd64两种架构 resources: # need more cpu upon initialization, therefore burstable class limits: cpu: 1000m requests: cpu: 100m env: - name: ELASTICSEARCH_HOSTS value: http://elasticsearch-logging:9200 ports: - containerPort: 5601 name: ui protocol: TCP volumeMounts:- name: configmountPath: /usr/share/kibana/config/kibana.ymlreadOnly: truesubPath: kibana.ymlvolumes:- name: configconfigMap:name: kibana-config#此处配置个ingress 避免通过外网访问不到的情况下处理
--- 
apiVersion: networking.k8s.io/v1
kind: Ingress 
metadata: name: kibana namespace: kube-logging 
spec: ingressClassName: nginxrules: - host: kibana.wssnail.comhttp: paths: - path: / pathType: Prefixbackend: service:name: kibana port:number: 5601 
# Create the resources
kubectl create -f kibana.yaml
# Check the resources
kubectl get all -n kube-logging
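As with the monitoring hosts in section I, the Kibana Ingress needs a hosts entry pointing at the node where the ingress-nginx Pod runs; the Service is also of type NodePort, so the UI can still be reached on the mapped port if the Ingress is unreachable. The IP below reuses the example address from section I:

# hosts entry for the Kibana Ingress
192.168.139.207  kibana.wssnail.com
# Or look up the NodePort published by the kibana Service
kubectl -n kube-logging get svc kibana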

 

 

III. Installing and using Dashboard

1. Installation

# Download the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# Change the Service type to "type: NodePort" (the full modified file is in the appendix), then apply it
kubectl apply -f recommended.yaml
# Check the resources
kubectl get all -n kubernetes-dashboard
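Since the Service type was changed to NodePort, the UI is served over HTTPS on whichever port was assigned; it can be looked up like this (kubernetes-dashboard is the Service name from the manifest in the appendix):

# Find the NodePort assigned to the dashboard Service
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard
# Then browse to https://<node-ip>:<node-port>/ (the certificate is self-signed)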

2. Creating a token

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-cluster-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kubernetes-dashboard
# Create the resources
kubectl create -f dashboard-admin.yaml
# Inspect the service account
kubectl describe serviceaccount dashboard-admin -n kubernetes-dashboard
# Read the token (the secret name suffix will differ in your cluster)
kubectl describe secrets dashboard-admin-token-krndx -n kubernetes-dashboard
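On Kubernetes 1.24 and later, service accounts no longer get a long-lived token Secret created automatically, so the describe command above may find nothing; in that case a short-lived token can be issued directly for the dashboard-admin service account defined above:

# Issue a token for the dashboard-admin service account (Kubernetes 1.24+)
kubectl -n kubernetes-dashboard create token dashboard-admin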

Note: the Dashboard must be accessed over HTTPS.

3. Usage

IV. Installing and using KubeSphere

1. Official documentation

https://kubesphere.io/zh/docs/v3.4

2. Preparation

  • Your Kubernetes version must be v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, or * v1.26.x. On the starred versions, some edge-node features may be unavailable, so if you need edge nodes, v1.23.x is recommended.
  • Make sure your machines meet the minimum hardware requirements: CPU > 1 core, memory > 2 GB.
  • Before installing, a default StorageClass must be configured in the Kubernetes cluster (a quick check is shown below).
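A quick check for the last prerequisite; the class marked "(default)" is the one KubeSphere will use when spec.persistence.storageClass is left empty in cluster-configuration.yaml:

# Exactly one StorageClass should be annotated as (default)
kubectl get storageclass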

2.1 Installing a default StorageClass

# Install the iSCSI initiator (OpenEBS relies on it for storage); run on every node
yum install iscsi-initiator-utils -y
# Enable the service at boot
systemctl enable --now iscsid
# Start the service
systemctl start iscsid
# Check the service status
systemctl status iscsid
# Install OpenEBS
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
# Check the status (pulling the images may take a while)
kubectl get all -n openebs
# On the control-plane node, create the local storage class
kubectl apply -f default-storage-class.yaml
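After the operator is running, confirm that the class created by default-storage-class.yaml is actually flagged as the default; the class name "local" below is an assumption taken from that file, so adjust it if yours differs:

# The default class is shown with "(default)" after its name
kubectl get sc
kubectl describe sc local | grep -i IsDefaultClass   # "local" is the assumed class name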

See appendix: openebs-operator.yaml

See appendix: default-storage-class.yaml

3. Installation

 kubesphere-installer.yaml

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:name: clusterconfigurations.installer.kubesphere.io
spec:group: installer.kubesphere.ioversions:- name: v1alpha1served: truestorage: trueschema:openAPIV3Schema:type: objectproperties:spec:type: objectx-kubernetes-preserve-unknown-fields: truestatus:type: objectx-kubernetes-preserve-unknown-fields: truescope: Namespacednames:plural: clusterconfigurationssingular: clusterconfigurationkind: ClusterConfigurationshortNames:- cc---
apiVersion: v1
kind: Namespace
metadata:name: kubesphere-system---
apiVersion: v1
kind: ServiceAccount
metadata:name: ks-installernamespace: kubesphere-system---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:name: ks-installer
rules:
- apiGroups:- ""resources:- '*'verbs:- '*'
- apiGroups:- appsresources:- '*'verbs:- '*'
- apiGroups:- extensionsresources:- '*'verbs:- '*'
- apiGroups:- batchresources:- '*'verbs:- '*'
- apiGroups:- rbac.authorization.k8s.ioresources:- '*'verbs:- '*'
- apiGroups:- apiregistration.k8s.ioresources:- '*'verbs:- '*'
- apiGroups:- apiextensions.k8s.ioresources:- '*'verbs:- '*'
- apiGroups:- tenant.kubesphere.ioresources:- '*'verbs:- '*'
- apiGroups:- certificates.k8s.ioresources:- '*'verbs:- '*'
- apiGroups:- devops.kubesphere.ioresources:- '*'verbs:- '*'
- apiGroups:- monitoring.coreos.comresources:- '*'verbs:- '*'
- apiGroups:- logging.kubesphere.ioresources:- '*'verbs:- '*'
- apiGroups:- jaegertracing.ioresources:- '*'verbs:- '*'
- apiGroups:- storage.k8s.ioresources:- '*'verbs:- '*'
- apiGroups:- admissionregistration.k8s.ioresources:- '*'verbs:- '*'
- apiGroups:- policyresources:- '*'verbs:- '*'
- apiGroups:- autoscalingresources:- '*'verbs:- '*'
- apiGroups:- networking.istio.ioresources:- '*'verbs:- '*'
- apiGroups:- config.istio.ioresources:- '*'verbs:- '*'
- apiGroups:- iam.kubesphere.ioresources:- '*'verbs:- '*'
- apiGroups:- notification.kubesphere.ioresources:- '*'verbs:- '*'
- apiGroups:- auditing.kubesphere.ioresources:- '*'verbs:- '*'
- apiGroups:- events.kubesphere.ioresources:- '*'verbs:- '*'
- apiGroups:- core.kubefed.ioresources:- '*'verbs:- '*'
- apiGroups:- installer.kubesphere.ioresources:- '*'verbs:- '*'
- apiGroups:- storage.kubesphere.ioresources:- '*'verbs:- '*'
- apiGroups:- security.istio.ioresources:- '*'verbs:- '*'
- apiGroups:- monitoring.kiali.ioresources:- '*'verbs:- '*'
- apiGroups:- kiali.ioresources:- '*'verbs:- '*'
- apiGroups:- networking.k8s.ioresources:- '*'verbs:- '*'
- apiGroups:- edgeruntime.kubesphere.ioresources:- '*'verbs:- '*'
- apiGroups:- types.kubefed.ioresources:- '*'verbs:- '*'
- apiGroups:- monitoring.kubesphere.ioresources:- '*'verbs:- '*'
- apiGroups:- application.kubesphere.ioresources:- '*'verbs:- '*'
- apiGroups:- alerting.kubesphere.ioresources:- '*'verbs:- '*'---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:name: ks-installer
subjects:
- kind: ServiceAccountname: ks-installernamespace: kubesphere-system
roleRef:kind: ClusterRolename: ks-installerapiGroup: rbac.authorization.k8s.io---
apiVersion: apps/v1
kind: Deployment
metadata:name: ks-installernamespace: kubesphere-systemlabels:app: ks-installer
spec:replicas: 1selector:matchLabels:app: ks-installertemplate:metadata:labels:app: ks-installerspec:serviceAccountName: ks-installercontainers:- name: installerimage: kubesphere/ks-installer:v3.4.1imagePullPolicy: "IfNotPresent"resources:limits:cpu: "1"memory: 1Girequests:cpu: 20mmemory: 100MivolumeMounts:- mountPath: /etc/localtimename: host-timereadOnly: truevolumes:- hostPath:path: /etc/localtimetype: ""name: host-time

cluster-configuration.yaml 

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:name: ks-installernamespace: kubesphere-systemlabels:version: v3.4.1
spec:persistence:storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.authentication:# adminPassword: ""     # Custom password of the admin user. If the parameter exists but the value is empty, a random password is generated. If the parameter does not exist, P@88w0rd is used.jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.local_registry: ""        # Add your private registry address if it is needed.
#  dev_tag: ""               # Add your kubesphere image tag you want to install, by default it's same as ks-installer release version.etcd:monitoring: false       # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.endpointIps: localhost  # etcd cluster EndpointIps. It can be a bunch of IPs here.port: 2379              # etcd port.tlsEnable: truecommon:core:console:enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.port: 30880type: NodePort# apiserver:            # Enlarge the apiserver and controller manager's resource requests and limits for the large cluster#  resources: {}# controllerManager:#  resources: {}redis:enabled: falseenableHA: falsevolumeSize: 2Gi # Redis PVC size.openldap:enabled: falsevolumeSize: 2Gi   # openldap PVC size.minio:volumeSize: 20Gi # Minio PVC size.monitoring:# type: external   # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.GPUMonitoring:     # Enable or disable the GPU-related metrics. If you enable this switch but have no GPU resources, Kubesphere will set it to zero.enabled: falsegpu:                 # Install GPUKinds. The default GPU kind is nvidia.com/gpu. Other GPU kinds can be added here according to your needs.kinds:- resourceName: "nvidia.com/gpu"resourceType: "GPU"default: truees:   # Storage backend for logging, events and auditing.# master:#   volumeSize: 4Gi  # The volume size of Elasticsearch master nodes.#   replicas: 1      # The total number of master nodes. Even numbers are not allowed.#   resources: {}# data:#   volumeSize: 20Gi  # The volume size of Elasticsearch data nodes.#   replicas: 1       # The total number of data nodes.#   resources: {}enabled: falselogMaxAge: 7             # Log retention time in built-in Elasticsearch. It is 7 days by default.elkPrefix: logstash      # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.basicAuth:enabled: falseusername: ""password: ""externalElasticsearchHost: ""externalElasticsearchPort: ""opensearch:   # Storage backend for logging, events and auditing.# master:#   volumeSize: 4Gi  # The volume size of Opensearch master nodes.#   replicas: 1      # The total number of master nodes. Even numbers are not allowed.#   resources: {}# data:#   volumeSize: 20Gi  # The volume size of Opensearch data nodes.#   replicas: 1       # The total number of data nodes.#   resources: {}enabled: truelogMaxAge: 7             # Log retention time in built-in Opensearch. It is 7 days by default.opensearchPrefix: whizard      # The string making up index names. 
The index name will be formatted as ks-<opensearchPrefix>-logging.basicAuth:enabled: trueusername: "admin"password: "admin"externalOpensearchHost: ""externalOpensearchPort: ""dashboard:enabled: falsealerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.enabled: false         # Enable or disable the KubeSphere Alerting System.# thanosruler:#   replicas: 1#   resources: {}auditing:                # Provide a security-relevant chronological set of records,recording the sequence of activities happening on the platform, initiated by different tenants.enabled: false         # Enable or disable the KubeSphere Auditing Log System.# operator:#   resources: {}# webhook:#   resources: {}devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.enabled: false         # Enable or disable the KubeSphere DevOps System.jenkinsCpuReq: 0.5jenkinsCpuLim: 1jenkinsMemoryReq: 4GijenkinsMemoryLim: 4Gi  # Recommend keep same as requests.memory.jenkinsVolumeSize: 16Gievents:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.enabled: false         # Enable or disable the KubeSphere Events System.# operator:#   resources: {}# exporter:#   resources: {}ruler:enabled: truereplicas: 2#   resources: {}logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.enabled: false         # Enable or disable the KubeSphere Logging System.logsidecar:enabled: truereplicas: 2# resources: {}metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).enabled: false                   # Enable or disable metrics-server.monitoring:storageClass: ""                 # If there is an independent StorageClass you need for Prometheus, you can specify it here. 
The default StorageClass is used by default.node_exporter:port: 9100# resources: {}# kube_rbac_proxy:#   resources: {}# kube_state_metrics:#   resources: {}# prometheus:#   replicas: 1  # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.#   volumeSize: 20Gi  # Prometheus PVC size.#   resources: {}#   operator:#     resources: {}# alertmanager:#   replicas: 1          # AlertManager Replicas.#   resources: {}# notification_manager:#   resources: {}#   operator:#     resources: {}#   proxy:#     resources: {}gpu:                           # GPU monitoring-related plug-in installation.nvidia_dcgm_exporter:        # Ensure that gpu resources on your hosts can be used normally, otherwise this plug-in will not work properly.enabled: false             # Check whether the labels on the GPU hosts contain "nvidia.com/gpu.present=true" to ensure that the DCGM pod is scheduled to these nodes.# resources: {}multicluster:clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.network:networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).# Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.enabled: false # Enable or disable network policies.ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.type: none # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.store:enabled: false # Enable or disable the KubeSphere App Store.servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.enabled: false     # Base component (pilot). 
Enable or disable KubeSphere Service Mesh (Istio-based).istio:  # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/components:ingressGateways:- name: istio-ingressgatewayenabled: falsecni:enabled: falseedgeruntime:          # Add edge nodes to your cluster and deploy workloads on edge nodes.enabled: falsekubeedge:        # kubeedge configurationsenabled: falsecloudCore:cloudHub:advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.- ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.service:cloudhubNodePort: "30000"cloudhubQuicNodePort: "30001"cloudhubHttpsNodePort: "30002"cloudstreamNodePort: "30003"tunnelNodePort: "30004"# resources: {}# hostNetWork: falseiptables-manager:enabled: true mode: "external"# resources: {}# edgeService:#   resources: {}gatekeeper:        # Provide admission policy and rule management, A validating (mutating TBA) webhook that enforces CRD-based policies executed by Open Policy Agent.enabled: false   # Enable or disable Gatekeeper.# controller_manager:#   resources: {}# audit:#   resources: {}terminal:# image: 'alpine:3.15' # There must be an nsenter program in the imagetimeout: 600         # Container timeout, if set to 0, no timeout will be used. The unit is seconds
# Download the manifests
wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/cluster-configuration.yaml
# Create the resources
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

Watch the installer log; the installation is complete once it prints the "Welcome to KubeSphere" summary with the console address and default account.

# Follow the installer log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# Check the resources
kubectl get all -n kubesphere-system

Note: the installation takes quite a while, so be patient.
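Once the installer reports success, the web console listens on the NodePort configured in cluster-configuration.yaml (30880 by default), and the default account is admin / P@88w0rd unless adminPassword was customized. ks-console is the Service name usually created by the installer:

# Confirm the console Service and its NodePort (30880 by default)
kubectl -n kubesphere-system get svc ks-console
# Then open http://<any-node-ip>:30880 and log in as admin / P@88w0rd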

4. Uninstallation

If the installation fails or you need to uninstall KubeSphere, use the deletion script below.

#!/usr/bin/env bash

function delete_sure(){
cat << eof
$(echo -e "\033[1;36mNote:\033[0m")
Delete the KubeSphere cluster, including the module kubesphere-system kubesphere-devops-system kubesphere-monitoring-system kubesphere-logging-system openpitrix-system.
eof

read -p "Please reconfirm that you want to delete the KubeSphere cluster.  (yes/no) " ans
while [[ "x"$ans != "xyes" && "x"$ans != "xno" ]]; do
  read -p "Please reconfirm that you want to delete the KubeSphere cluster.  (yes/no) " ans
done

if [[ "x"$ans == "xno" ]]; then
  exit
fi
}

delete_sure

# delete ks-install
kubectl delete deploy ks-installer -n kubesphere-system 2>/dev/null

# delete helm
for namespaces in kubesphere-system kubesphere-devops-system kubesphere-monitoring-system kubesphere-logging-system openpitrix-system kubesphere-monitoring-federated
do
  helm list -n $namespaces | grep -v NAME | awk '{print $1}' | sort -u | xargs -r -L1 helm uninstall -n $namespaces 2>/dev/null
done

# delete kubefed
kubectl get cc -n kubesphere-system ks-installer -o jsonpath="{.status.multicluster}" | grep enable
if [[ $? -eq 0 ]]; then
  helm uninstall -n kube-federation-system kubefed 2>/dev/null
  #kubectl delete ns kube-federation-system 2>/dev/null
fi

helm uninstall -n kube-system snapshot-controller 2>/dev/null

# delete kubesphere deployment
kubectl delete deployment -n kubesphere-system `kubectl get deployment -n kubesphere-system -o jsonpath="{.items[*].metadata.name}"` 2>/dev/null

# delete monitor statefulset
kubectl delete prometheus -n kubesphere-monitoring-system k8s 2>/dev/null
kubectl delete statefulset -n kubesphere-monitoring-system `kubectl get statefulset -n kubesphere-monitoring-system -o jsonpath="{.items[*].metadata.name}"` 2>/dev/null
# delete grafana
kubectl delete deployment -n kubesphere-monitoring-system grafana 2>/dev/null
kubectl --no-headers=true get pvc -n kubesphere-monitoring-system -o custom-columns=:metadata.namespace,:metadata.name | grep -E kubesphere-monitoring-system | xargs -n2 kubectl delete pvc -n 2>/dev/null

# delete pvc
pvcs="kubesphere-system|openpitrix-system|kubesphere-devops-system|kubesphere-logging-system"
kubectl --no-headers=true get pvc --all-namespaces -o custom-columns=:metadata.namespace,:metadata.name | grep -E $pvcs | xargs -n2 kubectl delete pvc -n 2>/dev/null

# delete rolebindings
delete_role_bindings() {
  for rolebinding in `kubectl -n $1 get rolebindings -l iam.kubesphere.io/user-ref -o jsonpath="{.items[*].metadata.name}"`
  do
    kubectl -n $1 delete rolebinding $rolebinding 2>/dev/null
  done
}

# delete roles
delete_roles() {
  kubectl -n $1 delete role admin 2>/dev/null
  kubectl -n $1 delete role operator 2>/dev/null
  kubectl -n $1 delete role viewer 2>/dev/null
  for role in `kubectl -n $1 get roles -l iam.kubesphere.io/role-template -o jsonpath="{.items[*].metadata.name}"`
  do
    kubectl -n $1 delete role $role 2>/dev/null
  done
}

# remove useless labels and finalizers
for ns in `kubectl get ns -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl label ns $ns kubesphere.io/workspace-
  kubectl label ns $ns kubesphere.io/namespace-
  kubectl patch ns $ns -p '{"metadata":{"finalizers":null,"ownerReferences":null}}'
  delete_role_bindings $ns
  delete_roles $ns
done

# delete clusters
for cluster in `kubectl get clusters -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch cluster $cluster -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete clusters --all 2>/dev/null

# delete workspaces
for ws in `kubectl get workspaces -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch workspace $ws -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete workspaces --all 2>/dev/null

# delete devopsprojects
for devopsproject in `kubectl get devopsprojects -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch devopsprojects $devopsproject -p '{"metadata":{"finalizers":null}}' --type=merge
done

for pip in `kubectl get pipeline -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch pipeline $pip -n `kubectl get pipeline -A | grep $pip | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

for s2ibinaries in `kubectl get s2ibinaries -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch s2ibinaries $s2ibinaries -n `kubectl get s2ibinaries -A | grep $s2ibinaries | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

for s2ibuilders in `kubectl get s2ibuilders -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch s2ibuilders $s2ibuilders -n `kubectl get s2ibuilders -A | grep $s2ibuilders | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

for s2ibuildertemplates in `kubectl get s2ibuildertemplates -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch s2ibuildertemplates $s2ibuildertemplates -n `kubectl get s2ibuildertemplates -A | grep $s2ibuildertemplates | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

for s2iruns in `kubectl get s2iruns -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch s2iruns $s2iruns -n `kubectl get s2iruns -A | grep $s2iruns | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

kubectl delete devopsprojects --all 2>/dev/null

# delete validatingwebhookconfigurations
for webhook in ks-events-admission-validate users.iam.kubesphere.io network.kubesphere.io validating-webhook-configuration
do
  kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io $webhook 2>/dev/null
done

# delete mutatingwebhookconfigurations
for webhook in ks-events-admission-mutate logsidecar-injector-admission-mutate mutating-webhook-configuration
do
  kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io $webhook 2>/dev/null
done

# delete users
for user in `kubectl get users -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch user $user -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete users --all 2>/dev/null

# delete helm resources
for resource_type in `echo helmcategories helmapplications helmapplicationversions helmrepos helmreleases`; do
  for resource_name in `kubectl get ${resource_type}.application.kubesphere.io -o jsonpath="{.items[*].metadata.name}"`; do
    kubectl patch ${resource_type}.application.kubesphere.io ${resource_name} -p '{"metadata":{"finalizers":null}}' --type=merge
  done
  kubectl delete ${resource_type}.application.kubesphere.io --all 2>/dev/null
done

# delete workspacetemplates
for workspacetemplate in `kubectl get workspacetemplates.tenant.kubesphere.io -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch workspacetemplates.tenant.kubesphere.io $workspacetemplate -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete workspacetemplates.tenant.kubesphere.io --all 2>/dev/null

# delete federatednamespaces in namespace kubesphere-monitoring-federated
for resource in $(kubectl get federatednamespaces.types.kubefed.io -n kubesphere-monitoring-federated -oname); do
  kubectl patch "${resource}" -p '{"metadata":{"finalizers":null}}' --type=merge -n kubesphere-monitoring-federated
done

# delete crds
for crd in `kubectl get crds -o jsonpath="{.items[*].metadata.name}"`
do
  if [[ $crd == *kubesphere.io ]]; then kubectl delete crd $crd 2>/dev/null; fi
done

# delete relevance ns
for ns in kubesphere-alerting-system kubesphere-controls-system kubesphere-devops-system kubesphere-logging-system kubesphere-monitoring-system kubesphere-monitoring-federated openpitrix-system kubesphere-system
do
  kubectl delete ns $ns 2>/dev/null
done
# Make the script executable
chmod 777 kubesphere-delete.sh
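Then run it and answer the confirmation prompt:

# Run the deletion script; type "yes" at the prompt to proceed
./kubesphere-delete.sh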

5. Usage

5.1 Main UI

5.2 Pluggable components

5.3 Platform management

5.4 Access control

5.5 Notification settings

V. Appendix

1. kube-prometheus source

Link: https://pan.baidu.com/s/1fhWsAlQG5M3qjnhS1IMU3g?pwd=qxgr   Extraction code: qxgr

2. kube-prometheus images

Link: https://pan.baidu.com/s/1TuGr5EW3U5Tj2egT1xEwtg?pwd=fj84   Extraction code: fj84

3. Dashboard manifest

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.apiVersion: v1
kind: Namespace
metadata:name: kubernetes-dashboard---apiVersion: v1
kind: ServiceAccount
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboardnamespace: kubernetes-dashboard---kind: Service
apiVersion: v1
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboardnamespace: kubernetes-dashboard
spec:type: NodePortports:- port: 443targetPort: 8443selector:k8s-app: kubernetes-dashboard---apiVersion: v1
kind: Secret
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboard-certsnamespace: kubernetes-dashboard
type: Opaque---apiVersion: v1
kind: Secret
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboard-csrfnamespace: kubernetes-dashboard
type: Opaque
data:csrf: ""---apiVersion: v1
kind: Secret
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboard-key-holdernamespace: kubernetes-dashboard
type: Opaque---kind: ConfigMap
apiVersion: v1
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboard-settingsnamespace: kubernetes-dashboard---kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboardnamespace: kubernetes-dashboard
rules:# Allow Dashboard to get, update and delete Dashboard exclusive secrets.- apiGroups: [""]resources: ["secrets"]resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]verbs: ["get", "update", "delete"]# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.- apiGroups: [""]resources: ["configmaps"]resourceNames: ["kubernetes-dashboard-settings"]verbs: ["get", "update"]# Allow Dashboard to get metrics.- apiGroups: [""]resources: ["services"]resourceNames: ["heapster", "dashboard-metrics-scraper"]verbs: ["proxy"]- apiGroups: [""]resources: ["services/proxy"]resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]verbs: ["get"]---kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboard
rules:# Allow Metrics Scraper to get metrics from the Metrics server- apiGroups: ["metrics.k8s.io"]resources: ["pods", "nodes"]verbs: ["get", "list", "watch"]---apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboardnamespace: kubernetes-dashboard
roleRef:apiGroup: rbac.authorization.k8s.iokind: Rolename: kubernetes-dashboard
subjects:- kind: ServiceAccountname: kubernetes-dashboardnamespace: kubernetes-dashboard---apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:name: kubernetes-dashboard
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: kubernetes-dashboard
subjects:- kind: ServiceAccountname: kubernetes-dashboardnamespace: kubernetes-dashboard---kind: Deployment
apiVersion: apps/v1
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboardnamespace: kubernetes-dashboard
spec:replicas: 1revisionHistoryLimit: 10selector:matchLabels:k8s-app: kubernetes-dashboardtemplate:metadata:labels:k8s-app: kubernetes-dashboardspec:securityContext:seccompProfile:type: RuntimeDefaultcontainers:- name: kubernetes-dashboardimage: kubernetesui/dashboard:v2.7.0imagePullPolicy: Alwaysports:- containerPort: 8443protocol: TCPargs:- --auto-generate-certificates- --namespace=kubernetes-dashboard# Uncomment the following line to manually specify Kubernetes API server Host# If not specified, Dashboard will attempt to auto discover the API server and connect# to it. Uncomment only if the default does not work.# - --apiserver-host=http://my-address:portvolumeMounts:- name: kubernetes-dashboard-certsmountPath: /certs# Create on-disk volume to store exec logs- mountPath: /tmpname: tmp-volumelivenessProbe:httpGet:scheme: HTTPSpath: /port: 8443initialDelaySeconds: 30timeoutSeconds: 30securityContext:allowPrivilegeEscalation: falsereadOnlyRootFilesystem: truerunAsUser: 1001runAsGroup: 2001volumes:- name: kubernetes-dashboard-certssecret:secretName: kubernetes-dashboard-certs- name: tmp-volumeemptyDir: {}serviceAccountName: kubernetes-dashboardnodeSelector:"kubernetes.io/os": linux# Comment the following tolerations if Dashboard must not be deployed on mastertolerations:- key: node-role.kubernetes.io/mastereffect: NoSchedule---kind: Service
apiVersion: v1
metadata:labels:k8s-app: dashboard-metrics-scrapername: dashboard-metrics-scrapernamespace: kubernetes-dashboard
spec:ports:- port: 8000targetPort: 8000selector:k8s-app: dashboard-metrics-scraper---kind: Deployment
apiVersion: apps/v1
metadata:labels:k8s-app: dashboard-metrics-scrapername: dashboard-metrics-scrapernamespace: kubernetes-dashboard
spec:replicas: 1revisionHistoryLimit: 10selector:matchLabels:k8s-app: dashboard-metrics-scrapertemplate:metadata:labels:k8s-app: dashboard-metrics-scraperspec:securityContext:seccompProfile:type: RuntimeDefaultcontainers:- name: dashboard-metrics-scraperimage: kubernetesui/metrics-scraper:v1.0.8ports:- containerPort: 8000protocol: TCPlivenessProbe:httpGet:scheme: HTTPpath: /port: 8000initialDelaySeconds: 30timeoutSeconds: 30volumeMounts:- mountPath: /tmpname: tmp-volumesecurityContext:allowPrivilegeEscalation: falsereadOnlyRootFilesystem: truerunAsUser: 1001runAsGroup: 2001serviceAccountName: kubernetes-dashboardnodeSelector:"kubernetes.io/os": linux# Comment the following tolerations if Dashboard must not be deployed on mastertolerations:- key: node-role.kubernetes.io/mastereffect: NoSchedulevolumes:- name: tmp-volumeemptyDir: {}

4. openebs-operator.yaml

# This manifest deploys the OpenEBS control plane components, 
# with associated CRs & RBAC rules
# NOTE: On GKE, deploy the openebs-operator.yaml in admin context
#
# NOTE: The Jiva and cStor components previously included in the Operator File  
#  are now removed and it is recommended for users to use cStor and Jiva CSI operators. 
#  To upgrade your Jiva and cStor volumes to CSI, please checkout the documentation at:
#  https://github.com/openebs/upgrade
#
# To deploy the legacy Jiva and cStor:
# kubectl apply -f https://openebs.github.io/charts/legacy-openebs-operator.yaml
# 
# To deploy cStor CSI:
# kubectl apply -f https://openebs.github.io/charts/cstor-operator.yaml
#
# To deploy Jiva CSI:
# kubectl apply -f https://openebs.github.io/charts/jiva-operator.yaml
## Create the OpenEBS namespace
apiVersion: v1
kind: Namespace
metadata:name: openebs
---
# Create Maya Service Account
apiVersion: v1
kind: ServiceAccount
metadata:name: openebs-maya-operatornamespace: openebs
---
# Define Role that allows operations on K8s pods/deployments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:name: openebs-maya-operator
rules:
- apiGroups: ["*"]resources: ["nodes", "nodes/proxy"]verbs: ["*"]
- apiGroups: ["*"]resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"]verbs: ["*"]
- apiGroups: ["*"]resources: ["statefulsets", "daemonsets"]verbs: ["*"]
- apiGroups: ["*"]resources: ["resourcequotas", "limitranges"]verbs: ["list", "watch"]
- apiGroups: ["*"]resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "certificatesigningrequests"]verbs: ["list", "watch"]
- apiGroups: ["*"]resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]resources: ["volumesnapshots", "volumesnapshotdatas"]verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]resources: ["customresourcedefinitions"]verbs: [ "get", "list", "create", "update", "delete", "patch"]
- apiGroups: ["openebs.io"]resources: [ "*"]verbs: ["*" ]
- apiGroups: ["cstor.openebs.io"]resources: [ "*"]verbs: ["*" ]
- apiGroups: ["coordination.k8s.io"]resources: ["leases"]verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: ["admissionregistration.k8s.io"]resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]verbs: ["get", "create", "list", "delete", "update", "patch"]
- nonResourceURLs: ["/metrics"]verbs: ["get"]
- apiGroups: ["*"]resources: ["poddisruptionbudgets"]verbs: ["get", "list", "create", "delete", "watch"]
- apiGroups: ["coordination.k8s.io"]resources: ["leases"]verbs: ["get", "create", "update"]
---
# Bind the Service Account with the Role Privileges.
# TODO: Check if default account also needs to be there
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:name: openebs-maya-operator
subjects:
- kind: ServiceAccountname: openebs-maya-operatornamespace: openebs
roleRef:kind: ClusterRolename: openebs-maya-operatorapiGroup: rbac.authorization.k8s.io
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:annotations:controller-gen.kubebuilder.io/version: v0.5.0creationTimestamp: nullname: blockdevices.openebs.io
spec:group: openebs.ionames:kind: BlockDevicelistKind: BlockDeviceListplural: blockdevicesshortNames:- bdsingular: blockdevicescope: Namespacedversions:- additionalPrinterColumns:- jsonPath: .spec.nodeAttributes.nodeNamename: NodeNametype: string- jsonPath: .spec.pathname: Pathpriority: 1type: string- jsonPath: .spec.filesystem.fsTypename: FSTypepriority: 1type: string- jsonPath: .spec.capacity.storagename: Sizetype: string- jsonPath: .status.claimStatename: ClaimStatetype: string- jsonPath: .status.statename: Statustype: string- jsonPath: .metadata.creationTimestampname: Agetype: datename: v1alpha1schema:openAPIV3Schema:description: BlockDevice is the Schema for the blockdevices APIproperties:apiVersion:description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'type: stringkind:description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'type: stringmetadata:type: objectspec:description: DeviceSpec defines the properties and runtime status of a BlockDeviceproperties:aggregateDevice:description: AggregateDevice was intended to store the hierarchical information in cases of LVM. However this is currently not implemented and may need to be re-looked into for better design. To be deprecatedtype: stringcapacity:description: Capacityproperties:logicalSectorSize:description: LogicalSectorSize is blockdevice logical-sector size in bytesformat: int32type: integerphysicalSectorSize:description: PhysicalSectorSize is blockdevice physical-Sector size in bytesformat: int32type: integerstorage:description: Storage is the blockdevice capacity in bytesformat: int64type: integerrequired:- storagetype: objectclaimRef:description: ClaimRef is the reference to the BDC which has claimed this BDproperties:apiVersion:description: API version of the referent.type: stringfieldPath:description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.'type: stringkind:description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'type: stringname:description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'type: stringnamespace:description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'type: stringresourceVersion:description: 'Specific resourceVersion to which this reference is made, if any. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'type: stringuid:description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'type: stringtype: objectdetails:description: Details contain static attributes of BD like model,serial, and so forthproperties:compliance:description: Compliance is standards/specifications version implemented by device firmware  such as SPC-1, SPC-2, etctype: stringdeviceType:description: DeviceType represents the type of device like sparse, disk, partition, lvm, cryptenum:- disk- partition- sparse- loop- lvm- crypt- dm- mpathtype: stringdriveType:description: DriveType is the type of backing drive, HDD/SSDenum:- HDD- SSD- Unknown- ""type: stringfirmwareRevision:description: FirmwareRevision is the disk firmware revisiontype: stringhardwareSectorSize:description: HardwareSectorSize is the hardware sector size in bytesformat: int32type: integerlogicalBlockSize:description: LogicalBlockSize is the logical block size in bytes reported by /sys/class/block/sda/queue/logical_block_sizeformat: int32type: integermodel:description: Model is model of disktype: stringphysicalBlockSize:description: PhysicalBlockSize is the physical block size in bytes reported by /sys/class/block/sda/queue/physical_block_sizeformat: int32type: integerserial:description: Serial is serial number of disktype: stringvendor:description: Vendor is vendor of disktype: stringtype: objectdevlinks:description: DevLinks contains soft links of a block device like /dev/by-id/... /dev/by-uuid/...items:description: DeviceDevLink holds the mapping between type and links like by-id type or by-path type linkproperties:kind:description: Kind is the type of link like by-id or by-path.enum:- by-id- by-pathtype: stringlinks:description: Links are the soft linksitems:type: stringtype: arraytype: objecttype: arrayfilesystem:description: FileSystem contains mountpoint and filesystem typeproperties:fsType:description: Type represents the FileSystem type of the block devicetype: stringmountPoint:description: MountPoint represents the mountpoint of the block device.type: stringtype: objectnodeAttributes:description: NodeAttributes has the details of the node on which BD is attachedproperties:nodeName:description: NodeName is the name of the Kubernetes node resource on which the device is attachedtype: stringtype: objectparentDevice:description: "ParentDevice was intended to store the UUID of the parent Block Device as is the case for partitioned block devices. \n For example: /dev/sda is the parent for /dev/sda1 To be deprecated"type: stringpartitioned:description: Partitioned represents if BlockDevice has partitions or not (Yes/No) Currently always default to No. To be deprecatedenum:- "Yes"- "No"type: stringpath:description: Path contain devpath (e.g. /dev/sdb)type: stringrequired:- capacity- devlinks- nodeAttributes- pathtype: objectstatus:description: DeviceStatus defines the observed state of BlockDeviceproperties:claimState:description: ClaimState represents the claim state of the block deviceenum:- Claimed- Unclaimed- Releasedtype: stringstate:description: State is the current state of the blockdevice (Active/Inactive/Unknown)enum:- Active- Inactive- Unknowntype: stringrequired:- claimState- statetype: objecttype: objectserved: truestorage: truesubresources: {}
status:acceptedNames:kind: ""plural: ""conditions: []storedVersions: []---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.5.0
  creationTimestamp: null
  name: blockdeviceclaims.openebs.io
spec:
  group: openebs.io
  names:
    kind: BlockDeviceClaim
    listKind: BlockDeviceClaimList
    plural: blockdeviceclaims
    shortNames:
    - bdc
    singular: blockdeviceclaim
  scope: Namespaced
  versions:
  - additionalPrinterColumns:
    - jsonPath: .spec.blockDeviceName
      name: BlockDeviceName
      type: string
    - jsonPath: .status.phase
      name: Phase
      type: string
    - jsonPath: .metadata.creationTimestamp
      name: Age
      type: date
    name: v1alpha1
    schema:
      openAPIV3Schema:
        description: BlockDeviceClaim is the Schema for the blockdeviceclaims API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: DeviceClaimSpec defines the request details for a BlockDevice
            properties:
              blockDeviceName:
                description: BlockDeviceName is the reference to the block-device backing this claim
                type: string
              blockDeviceNodeAttributes:
                description: BlockDeviceNodeAttributes is the attributes on the node from which a BD should be selected for this claim. It can include nodename, failure domain etc.
                properties:
                  hostName:
                    description: HostName represents the hostname of the Kubernetes node resource where the BD should be present
                    type: string
                  nodeName:
                    description: NodeName represents the name of the Kubernetes node resource where the BD should be present
                    type: string
                type: object
              deviceClaimDetails:
                description: Details of the device to be claimed
                properties:
                  allowPartition:
                    description: AllowPartition represents whether to claim a full block device or a device that is a partition
                    type: boolean
                  blockVolumeMode:
                    description: 'BlockVolumeMode represents whether to claim a device in Block mode or Filesystem mode. These are use cases of BlockVolumeMode: 1) Not specified: VolumeMode check will not be effective 2) VolumeModeBlock: BD should not have any filesystem or mountpoint 3) VolumeModeFileSystem: BD should have a filesystem and mountpoint. If DeviceFormat is specified then the format should match with the FSType in BD'
                    type: string
                  formatType:
                    description: Format of the device required, eg:ext4, xfs
                    type: string
                type: object
              deviceType:
                description: DeviceType represents the type of drive like SSD, HDD etc.,
                nullable: true
                type: string
              hostName:
                description: Node name from where blockdevice has to be claimed. To be deprecated. Use NodeAttributes.HostName instead
                type: string
              resources:
                description: Resources will help with placing claims on Capacity, IOPS
                properties:
                  requests:
                    additionalProperties:
                      anyOf:
                      - type: integer
                      - type: string
                      pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
                      x-kubernetes-int-or-string: true
                    description: 'Requests describes the minimum resources required. eg: if storage resource of 10G is requested minimum capacity of 10G should be available TODO for validating'
                    type: object
                required:
                - requests
                type: object
              selector:
                description: Selector is used to find block devices to be considered for claiming
                properties:
                  matchExpressions:
                    description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
                    items:
                      description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
                      properties:
                        key:
                          description: key is the label key that the selector applies to.
                          type: string
                        operator:
                          description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
                          type: string
                        values:
                          description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
                          items:
                            type: string
                          type: array
                      required:
                      - key
                      - operator
                      type: object
                    type: array
                  matchLabels:
                    additionalProperties:
                      type: string
                    description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
                    type: object
                type: object
            type: object
          status:
            description: DeviceClaimStatus defines the observed state of BlockDeviceClaim
            properties:
              phase:
                description: Phase represents the current phase of the claim
                type: string
            required:
            - phase
            type: object
        type: object
    served: true
    storage: true
    subresources: {}
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
---
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
  labels:
    openebs.io/component-name: ndm-config
data:
  # udev-probe is default or primary probe it should be enabled to run ndm
  # filterconfigs contains configs of filters. To provide a group of include
  # and exclude values add it as , separated string
  node-disk-manager.config: |
    probeconfigs:
      - key: udev-probe
        name: udev probe
        state: true
      - key: seachest-probe
        name: seachest probe
        state: false
      - key: smart-probe
        name: smart probe
        state: true
    filterconfigs:
      - key: os-disk-exclude-filter
        name: os disk exclude filter
        state: true
        exclude: "/,/etc/hosts,/boot"
      - key: vendor-filter
        name: vendor filter
        state: true
        include: ""
        exclude: "CLOUDBYT,OpenEBS"
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/md,/dev/dm-,/dev/rbd,/dev/zd"
    # metconfig can be used to decorate the block device with different types of labels
    # that are available on the node or come in a device properties.
    # node labels - the node where bd is discovered. A whitlisted label prefixes
    # attribute labels - a property of the BD can be added as a ndm label as ndm.io/<property>=<property-value>
    metaconfigs:
      - key: node-labels
        name: node labels
        pattern: ""
      - key: device-labels
        name: device labels
        type: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openebs-ndm
  namespace: openebs
  labels:
    name: openebs-ndm
    openebs.io/component-name: ndm
    openebs.io/version: 3.5.0
spec:
  selector:
    matchLabels:
      name: openebs-ndm
      openebs.io/component-name: ndm
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: openebs-ndm
        openebs.io/component-name: ndm
        openebs.io/version: 3.5.0
    spec:
      # By default the node-disk-manager will be run on all kubernetes nodes
      # If you would like to limit this to only some nodes, say the nodes
      # that have storage attached, you could label those node and use
      # nodeSelector.
      #
      # e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node"
      # kubectl label node <node-name> "openebs.io/nodegroup"="storage-node"
      #nodeSelector:
      #  "openebs.io/nodegroup": "storage-node"
      serviceAccountName: openebs-maya-operator
      hostNetwork: true
      # host PID is used to check status of iSCSI Service when the NDM
      # API service is enabled
      #hostPID: true
      containers:
      - name: node-disk-manager
        image: openebs/node-disk-manager:2.1.0
        args:
        - -v=4
        # The feature-gate is used to enable the new UUID algorithm.
        - --feature-gates="GPTBasedUUID"
        # Use partition table UUID instead of create single partition to get
        # partition UUID. Require `GPTBasedUUID` to be enabled with.
        # - --feature-gates="PartitionTableUUID"
        # Detect changes to device size, filesystem and mount-points without restart.
        # - --feature-gates="ChangeDetection"
        # The feature gate is used to start the gRPC API service. The gRPC server
        # starts at 9115 port by default. This feature is currently in Alpha state
        # - --feature-gates="APIService"
        # The feature gate is used to enable NDM, to create blockdevice resources
        # for unused partitions on the OS disk
        # - --feature-gates="UseOSDisk"
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        volumeMounts:
        - name: config
          mountPath: /host/node-disk-manager.config
          subPath: node-disk-manager.config
          readOnly: true
        # make udev database available inside container
        - name: udev
          mountPath: /run/udev
        - name: procmount
          mountPath: /host/proc
          readOnly: true
        - name: devmount
          mountPath: /dev
        - name: basepath
          mountPath: /var/openebs/ndm
        - name: sparsepath
          mountPath: /var/openebs/sparse
        env:
        # namespace in which NDM is installed will be passed to NDM Daemonset
        # as environment variable
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # pass hostname as env variable using downward API to the NDM container
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        # specify the directory where the sparse files need to be created.
        # if not specified, then sparse files will not be created.
        - name: SPARSE_FILE_DIR
          value: "/var/openebs/sparse"
        # Size(bytes) of the sparse file to be created.
        - name: SPARSE_FILE_SIZE
          value: "10737418240"
        # Specify the number of sparse files to be created
        - name: SPARSE_FILE_COUNT
          value: "0"
        livenessProbe:
          exec:
            command:
            - pgrep
            - "ndm"
          initialDelaySeconds: 30
          periodSeconds: 60
      volumes:
      - name: config
        configMap:
          name: openebs-ndm-config
      - name: udev
        hostPath:
          path: /run/udev
          type: Directory
      # mount /proc (to access mount file of process 1 of host) inside container
      # to read mount-point of disks and partitions
      - name: procmount
        hostPath:
          path: /proc
          type: Directory
      - name: devmount
        # the /dev directory is mounted so that we have access to the devices that
        # are connected at runtime of the pod.
        hostPath:
          path: /dev
          type: Directory
      - name: basepath
        hostPath:
          path: /var/openebs/ndm
          type: DirectoryOrCreate
      - name: sparsepath
        hostPath:
          path: /var/openebs/sparse
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-ndm-operator
  namespace: openebs
  labels:
    name: openebs-ndm-operator
    openebs.io/component-name: ndm-operator
    openebs.io/version: 3.5.0
spec:
  selector:
    matchLabels:
      name: openebs-ndm-operator
      openebs.io/component-name: ndm-operator
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: openebs-ndm-operator
        openebs.io/component-name: ndm-operator
        openebs.io/version: 3.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
      - name: node-disk-operator
        image: openebs/node-disk-operator:2.1.0
        imagePullPolicy: IfNotPresent
        env:
        - name: WATCH_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # the service account of the ndm-operator pod
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: OPERATOR_NAME
          value: "node-disk-operator"
        - name: CLEANUP_JOB_IMAGE
          value: "openebs/linux-utils:3.5.0"
        # OPENEBS_IO_IMAGE_PULL_SECRETS environment variable is used to pass the image pull secrets
        # to the cleanup pod launched by NDM operator
        #- name: OPENEBS_IO_IMAGE_PULL_SECRETS
        #  value: ""
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8585
          initialDelaySeconds: 15
          periodSeconds: 20
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8585
          initialDelaySeconds: 5
          periodSeconds: 10
---
# Create NDM cluster exporter deployment.
# This is an optional component and is not required for the basic
# functioning of NDM
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-ndm-cluster-exporter
  namespace: openebs
  labels:
    name: openebs-ndm-cluster-exporter
    openebs.io/component-name: ndm-cluster-exporter
    openebs.io/version: 3.5.0
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: openebs-ndm-cluster-exporter
      openebs.io/component-name: ndm-cluster-exporter
  template:
    metadata:
      labels:
        name: openebs-ndm-cluster-exporter
        openebs.io/component-name: ndm-cluster-exporter
        openebs.io/version: 3.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
      - name: ndm-cluster-exporter
        image: openebs/node-disk-exporter:2.1.0
        command:
        - /usr/local/bin/exporter
        args:
        - "start"
        - "--mode=cluster"
        - "--port=$(METRICS_LISTEN_PORT)"
        - "--metrics=/metrics"
        ports:
        - containerPort: 9100
          protocol: TCP
          name: metrics
        imagePullPolicy: IfNotPresent
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: METRICS_LISTEN_PORT
          value: :9100
---
# Create NDM cluster exporter service
# This is optional and required only when
# ndm-cluster-exporter deployment is used
apiVersion: v1
kind: Service
metadata:
  name: openebs-ndm-cluster-exporter-service
  namespace: openebs
  labels:
    name: openebs-ndm-cluster-exporter-service
    openebs.io/component-name: ndm-cluster-exporter
    app: openebs-ndm-exporter
spec:
  clusterIP: None
  ports:
  - name: metrics
    port: 9100
    targetPort: 9100
  selector:
    name: openebs-ndm-cluster-exporter
---
# Create NDM node exporter daemonset.
# This is an optional component used for getting disk level
# metrics from each of the storage nodes
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openebs-ndm-node-exporter
  namespace: openebs
  labels:
    name: openebs-ndm-node-exporter
    openebs.io/component-name: ndm-node-exporter
    openebs.io/version: 3.5.0
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      name: openebs-ndm-node-exporter
      openebs.io/component-name: ndm-node-exporter
  template:
    metadata:
      labels:
        name: openebs-ndm-node-exporter
        openebs.io/component-name: ndm-node-exporter
        openebs.io/version: 3.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
      - name: node-disk-exporter
        image: openebs/node-disk-exporter:2.1.0
        command:
        - /usr/local/bin/exporter
        args:
        - "start"
        - "--mode=node"
        - "--port=$(METRICS_LISTEN_PORT)"
        - "--metrics=/metrics"
        ports:
        - containerPort: 9101
          protocol: TCP
          name: metrics
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: METRICS_LISTEN_PORT
          value: :9101
---
# Create NDM node exporter service
# This is optional and required only when
# ndm-node-exporter daemonset is used
apiVersion: v1
kind: Service
metadata:
  name: openebs-ndm-node-exporter-service
  namespace: openebs
  labels:
    name: openebs-ndm-node-exporter
    openebs.io/component: openebs-ndm-node-exporter
    app: openebs-ndm-exporter
spec:
  clusterIP: None
  ports:
  - name: metrics
    port: 9101
    targetPort: 9101
  selector:
    name: openebs-ndm-node-exporter
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-localpv-provisioner
  namespace: openebs
  labels:
    name: openebs-localpv-provisioner
    openebs.io/component-name: openebs-localpv-provisioner
    openebs.io/version: 3.5.0
spec:
  selector:
    matchLabels:
      name: openebs-localpv-provisioner
      openebs.io/component-name: openebs-localpv-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: openebs-localpv-provisioner
        openebs.io/component-name: openebs-localpv-provisioner
        openebs.io/version: 3.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
      - name: openebs-provisioner-hostpath
        imagePullPolicy: IfNotPresent
        image: openebs/provisioner-localpv:3.4.0
        args:
        - "--bd-time-out=$(BDC_BD_BIND_RETRIES)"
        env:
        # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
        # based on this address. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_K8S_MASTER
        #  value: "http://10.128.0.12:8080"
        # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
        # based on this config. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_KUBE_CONFIG
        #  value: "/home/ubuntu/.kube/config"
        # This sets the number of times the provisioner should try
        # with a polling interval of 5 seconds, to get the Blockdevice
        # Name from a BlockDeviceClaim, before the BlockDeviceClaim
        # is deleted. E.g. 12 * 5 seconds = 60 seconds timeout
        - name: BDC_BD_BIND_RETRIES
          value: "12"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: OPENEBS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as
        # environment variable
        - name: OPENEBS_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: OPENEBS_IO_ENABLE_ANALYTICS
          value: "true"
        - name: OPENEBS_IO_INSTALLER_TYPE
          value: "openebs-operator"
        - name: OPENEBS_IO_HELPER_IMAGE
          value: "openebs/linux-utils:3.5.0"
        - name: OPENEBS_IO_BASE_PATH
          value: "/var/openebs/local"
        # LEADER_ELECTION_ENABLED is used to enable/disable leader election. By default
        # leader election is enabled.
        #- name: LEADER_ELECTION_ENABLED
        #  value: "true"
        # OPENEBS_IO_IMAGE_PULL_SECRETS environment variable is used to pass the image pull secrets
        # to the helper pod launched by local-pv hostpath provisioner
        #- name: OPENEBS_IO_IMAGE_PULL_SECRETS
        #  value: ""
        # Process name used for matching is limited to the 15 characters
        # present in the pgrep output.
        # So fullname can't be used here with pgrep (>15 chars). A regular expression
        # that matches the entire command name has to specified.
        # Anchor `^` : matches any string that starts with `provisioner-loc`
        # `.*`: matches any string that has `provisioner-loc` followed by zero or more char
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - test `pgrep -c "^provisioner-loc.*"` = 1
          initialDelaySeconds: 30
          periodSeconds: 60
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      #hostpath type will create a PV by
      # creating a sub-directory under the
      # BASEPATH provided below.
      - name: StorageType
        value: "hostpath"
      #Specify the location (directory) where
      # where PV(volume) data will be saved.
      # A sub-directory with pv-name will be
      # created. When the volume is deleted,
      # the PV sub-directory will be deleted.
      #Default value is /var/openebs/local
      - name: BasePath
        value: "/var/openebs/local/"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      #device type will create a PV by
      # issuing a BDC and will extract the path
      # values from the associated BD.
      - name: StorageType
        value: "device"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
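The manifest above is meant to be applied as a single file. A minimal sketch of installing and verifying it, assuming this appendix is saved locally as openebs-operator.yaml and kubectl already points at the target cluster:

#Apply the OpenEBS operator manifest (the file name is just this appendix saved to disk)
kubectl apply -f openebs-operator.yaml

#Watch the NDM and LocalPV provisioner Pods until they are Running
kubectl get pods -n openebs -w

#Confirm the openebs-hostpath and openebs-device StorageClasses were created
kubectl get sc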

5、default-storage-class.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local
  annotations:
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local/"
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"cas.openebs.io/config":"- name: StorageType\n  value: \"hostpath\"\n- name: BasePath\n  value: \"/var/openebs/local/\"\n","openebs.io/cas-type":"local","storageclass.beta.kubernetes.io/is-default-class":"true","storageclass.kubesphere.io/supported-access-modes":"[\"ReadWriteOnce\"]"},"name":"local"},"provisioner":"openebs.io/local","reclaimPolicy":"Delete","volumeBindingMode":"WaitForFirstConsumer"}
    openebs.io/cas-type: local
    storageclass.beta.kubernetes.io/is-default-class: 'true'
    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce"]'
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
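To confirm the class above was registered and is flagged as the cluster default:

#the "local" entry should show (default) in the output
kubectl apply -f default-storage-class.yaml
kubectl get sc

As a hypothetical smoke test (not part of the original file), a PVC that omits storageClassName should be served by this default class; because volumeBindingMode is WaitForFirstConsumer, the claim stays Pending until a Pod actually mounts it:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-local-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi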

6、Retagging the required images

docker pull v5cn/prometheus-adapter:v0.9.1
docker pull easzlab/kube-state-metrics:v2.5.0

docker tag v5cn/prometheus-adapter:v0.9.1 k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.1
docker tag easzlab/kube-state-metrics:v2.5.0 k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.5.0
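If the worker nodes cannot reach k8s.gcr.io either, one possible follow-up (a sketch; the archive file names are only illustrative) is to export the retagged images and load them on each node:

#Save the two retagged images to tar archives
docker save -o prometheus-adapter-v0.9.1.tar k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.1
docker save -o kube-state-metrics-v2.5.0.tar k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.5.0

#Copy the archives to each node, then import them there
docker load -i prometheus-adapter-v0.9.1.tar
docker load -i kube-state-metrics-v2.5.0.tar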

六、References

1、https://blog.csdn.net/qiliang1033/article/details/132826339

2、http://www.bilibili.com/video/BV1MT411x7GH/

