[K8S] Installing the dashboard on K8S 1.18.2 (based on kubernetes-dashboard 2.0.0)


Preface

The K8S cluster is deployed successfully; now, how do we manage it visually? Don't worry. In this article we will set up kubernetes-dashboard to solve exactly that problem.

For installing the K8S cluster, see "[K8S] Installing a K8S Cluster Based on a Single Master Node".

For installing Metrics-Server, see "[K8S] Deploying the Metrics-Server Service on K8s".

Installing and deploying the dashboard

1. Check that the pods are running

[root@binghe101 ~]# kubectl get pods -A  -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE    IP                NODE        NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-5b8b769fcd-l2tmm   1/1     Running   2          15h    172.18.203.71     binghe101   <none>           <none>
kube-system   calico-node-7b7fx                          1/1     Running   2          15h    192.168.175.102   binghe102   <none>           <none>
kube-system   calico-node-8krsl                          1/1     Running   2          15h    192.168.175.101   binghe101   <none>           <none>
kube-system   coredns-546565776c-rd2zr                   1/1     Running   2          15h    172.18.203.72     binghe101   <none>           <none>
kube-system   coredns-546565776c-x8r7l                   1/1     Running   2          15h    172.18.203.73     binghe101   <none>           <none>
kube-system   etcd-binghe101                             1/1     Running   2          15h    192.168.175.101   binghe101   <none>           <none>
kube-system   kube-apiserver-binghe101                   1/1     Running   3          15h    192.168.175.101   binghe101   <none>           <none>
kube-system   kube-controller-manager-binghe101          1/1     Running   3          15h    192.168.175.101   binghe101   <none>           <none>
kube-system   kube-proxy-cgq5n                           1/1     Running   2          15h    192.168.175.102   binghe102   <none>           <none>
kube-system   kube-proxy-qnffb                           1/1     Running   2          15h    192.168.175.101   binghe101   <none>           <none>
kube-system   kube-scheduler-binghe101                   1/1     Running   3          15h    192.168.175.101   binghe101   <none>           <none>
kube-system   metrics-server-57bc7f4584-cwsn8            1/1     Running   0          109m   172.18.229.68     binghe102   <none>           <none>

2. Download the recommended.yaml file

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
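Before editing, it may help to confirm the download succeeded and locate the Service definition you are about to change. A minimal sketch using standard grep:

# Print the line numbers of the Service definition and the target port
grep -n "kind: Service" recommended.yaml
grep -n "targetPort: 8443" recommended.yaml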

3. Modify the recommended.yaml file

vim recommended.yaml

The content that needs to be modified is shown below.

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort      # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000 # added
  selector:
    k8s-app: kubernetes-dashboard
---
# Many browsers cannot use the automatically generated certificate, so we create our own
# and comment out the kubernetes-dashboard-certs Secret declaration:
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque
---
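After editing, it is worth validating the manifest before applying it. A quick client-side dry run (the --dry-run=client form is available on kubectl 1.18 and later) catches indentation mistakes without touching the cluster:

# Validate the edited manifest without creating any objects
kubectl apply -f recommended.yaml --dry-run=client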

4. Create the certificate

mkdir dashboard-certs
cd dashboard-certs/

# Create the namespace
kubectl create namespace kubernetes-dashboard

# Generate the private key
openssl genrsa -out dashboard.key 2048

# Generate the certificate signing request
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'

# Self-sign the certificate (the validity period must be set here; -days is ignored on 'req -new')
openssl x509 -req -days 36000 -in dashboard.csr -signkey dashboard.key -out dashboard.crt

# Create the kubernetes-dashboard-certs Secret from the key and certificate
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
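To verify that the certificate and the Secret were created correctly, you can inspect both; a quick sketch with standard openssl and kubectl commands:

# Check the subject and validity window of the self-signed certificate
openssl x509 -in dashboard.crt -noout -subject -dates

# Confirm the Secret exists and contains both dashboard.key and dashboard.crt
kubectl describe secret kubernetes-dashboard-certs -n kubernetes-dashboard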

5. Install the dashboard

kubectl create -f ~/recommended.yaml 

Note: this step may report an error like the following.

Error from server (AlreadyExists): error when creating "./recommended.yaml": namespaces "kubernetes-dashboard" already exists

This is because the kubernetes-dashboard namespace was already created in the certificate step, so this error message can simply be ignored.
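If you would rather avoid the error altogether, kubectl apply tolerates pre-existing objects and updates them in place instead of failing:

# apply is idempotent, unlike create
kubectl apply -f ~/recommended.yaml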

6. Check the installation result

[root@binghe101 ~]# kubectl get pods -A  -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE    IP                NODE        NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-5b8b769fcd-l2tmm     1/1     Running   2          15h    172.18.203.71     binghe101   <none>           <none>
kube-system            calico-node-7b7fx                            1/1     Running   2          15h    192.168.175.102   binghe102   <none>           <none>
kube-system            calico-node-8krsl                            1/1     Running   2          15h    192.168.175.101   binghe101   <none>           <none>
kube-system            coredns-546565776c-rd2zr                     1/1     Running   2          15h    172.18.203.72     binghe101   <none>           <none>
kube-system            coredns-546565776c-x8r7l                     1/1     Running   2          15h    172.18.203.73     binghe101   <none>           <none>
kube-system            etcd-binghe101                               1/1     Running   2          15h    192.168.175.101   binghe101   <none>           <none>
kube-system            kube-apiserver-binghe101                     1/1     Running   3          15h    192.168.175.101   binghe101   <none>           <none>
kube-system            kube-controller-manager-binghe101            1/1     Running   3          15h    192.168.175.101   binghe101   <none>           <none>
kube-system            kube-proxy-cgq5n                             1/1     Running   2          15h    192.168.175.102   binghe102   <none>           <none>
kube-system            kube-proxy-qnffb                             1/1     Running   2          15h    192.168.175.101   binghe101   <none>           <none>
kube-system            kube-scheduler-binghe101                     1/1     Running   3          15h    192.168.175.101   binghe101   <none>           <none>
kube-system            metrics-server-57bc7f4584-cwsn8              1/1     Running   0          133m   172.18.229.68     binghe102   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-qccwt   1/1     Running   0          102s   172.18.229.75     binghe102   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-s8cgd        1/1     Running   0          102s   172.18.229.74     binghe102   <none>           <none>
[root@binghe101 ~]# kubectl get service -n kubernetes-dashboard  -o wide
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE     SELECTOR
dashboard-metrics-scraper   ClusterIP   10.96.249.138   <none>        8000/TCP        2m21s   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        NodePort    10.96.219.128   <none>        443:30000/TCP   2m21s   k8s-app=kubernetes-dashboard
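With the Service reporting 443:30000/TCP, you can sanity-check connectivity from the shell before opening a browser. A minimal sketch (the -k flag skips verification of our self-signed certificate; 192.168.175.101 is this cluster's master node IP):

# Expect an HTTP status code such as 200 from the dashboard's HTTPS endpoint
curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.175.101:30000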

7. Create the dashboard administrator

Create the dashboard-admin.yaml file.

vim dashboard-admin.yaml

The content of the file is shown below.

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard

After saving and exiting, run the following command to create the administrator.

kubectl create -f ./dashboard-admin.yaml
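You can confirm the ServiceAccount exists with kubectl get. For reference, the same account could also be created imperatively, although that form would omit the k8s-app label from the YAML above:

# Verify the ServiceAccount was created
kubectl get serviceaccount dashboard-admin -n kubernetes-dashboard

# Imperative alternative to the YAML (omits the label)
# kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard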

8. Grant the user permissions

Create the dashboard-admin-bind-cluster-role.yaml file.

vim dashboard-admin-bind-cluster-role.yaml

The content of the file is shown below.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard

After saving and exiting, run the following command to grant the user permissions.

kubectl create -f ./dashboard-admin-bind-cluster-role.yaml
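Note that binding to cluster-admin grants this account full control over the entire cluster, which is acceptable for a test environment but too broad for production, where a narrower ClusterRole is advisable. For reference, the same binding can be created imperatively:

# Imperative equivalent of the ClusterRoleBinding YAML above
kubectl create clusterrolebinding dashboard-admin-bind-cluster-role \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:dashboard-admin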

9. View and copy the user Token

Run the following command on the command line.

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')

A sample run looks like this.

[root@binghe101 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-p8tng
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: c3640b5f-cd92-468c-ba01-c886290c41ca

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlVsRVBqTG5RNC1oTlpDS2xMRXF2cFIxWm44ZXhWeXlBRG5SdXpmQXpDdWcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcDh0bmciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYzM2NDBiNWYtY2Q5Mi00NjhjLWJhMDEtYzg4NjI5MGM0MWNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.XOrXofgbk5EDa8COxOkv31mYwciUGXcBD9TQrb6QTOfT2W4eEpAAZUzKYzSmxLeHMqvu_IUIUF2mU5Lt6wN3L93C2NLfV9jqaopfq0Q5GjgWNgGRZAgsuz5W3v_ntlKz0_VW3a7ix3QQSrEWLBF6YUPrzl8p3r8OVWpDUndjx-OXEw5pcYQLH1edy-tpQ6Bc8S1BnK-d4Zf-ZuBeH0X6orZKhdSWhj9WQDJUx6DBpjx9DUc9XecJY440HVti5hmaGyfd8v0ofgtdsSE7q1iizm-MffJpcp4PGnUU3hy1J-XIP0M-8SpAyg2Pu_-mQvFfoMxIPEEzpOrckfC1grlZ3g

As you can see, the Token value is:

eyJhbGciOiJSUzI1NiIsImtpZCI6IlVsRVBqTG5RNC1oTlpDS2xMRXF2cFIxWm44ZXhWeXlBRG5SdXpmQXpDdWcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcDh0bmciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYzM2NDBiNWYtY2Q5Mi00NjhjLWJhMDEtYzg4NjI5MGM0MWNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.XOrXofgbk5EDa8COxOkv31mYwciUGXcBD9TQrb6QTOfT2W4eEpAAZUzKYzSmxLeHMqvu_IUIUF2mU5Lt6wN3L93C2NLfV9jqaopfq0Q5GjgWNgGRZAgsuz5W3v_ntlKz0_VW3a7ix3QQSrEWLBF6YUPrzl8p3r8OVWpDUndjx-OXEw5pcYQLH1edy-tpQ6Bc8S1BnK-d4Zf-ZuBeH0X6orZKhdSWhj9WQDJUx6DBpjx9DUc9XecJY440HVti5hmaGyfd8v0ofgtdsSE7q1iizm-MffJpcp4PGnUU3hy1J-XIP0M-8SpAyg2Pu_-mQvFfoMxIPEEzpOrckfC1grlZ3g
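If you only need the raw token without the surrounding describe output, a jsonpath query works as well; a sketch reusing the same grep/awk lookup from above:

# Print just the decoded token
kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 --decode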

Viewing the dashboard UI

Open https://192.168.175.101:30000 in a browser, as shown below.

[Screenshot: the dashboard login page]

Here, we choose the Token login method and enter the Token obtained on the command line, as shown below.

[Screenshot: entering the Token on the login page]

After clicking Sign in, we enter the dashboard, as shown below.

[Screenshot: the dashboard overview page]

Because we installed Metrics-Server in "[K8S] Deploying the Metrics-Server Service on K8s", we can also view the CPU and memory usage of the node servers, as shown below.

[Screenshot: node CPU and memory usage]

At this point, dashboard 2.0.0 has been installed successfully.

Closing words

If you found this article helpful, search WeChat for and follow the "冰河技术" official account to learn all kinds of programming techniques with Binghe.

Finally, here is a link to a comprehensive K8S knowledge map:

https://www.processon.com/view/link/5ac64532e4b00dc8a02f05eb?spm=a2c4e.10696291.0.0.6ec019a4bYSFIw#map

I wish you fewer detours on your K8S learning journey.

