Installing Kubernetes v1.22.1 with Kubeeasy (Installation Errors Resolved)

Preparing the Base Environment

Upload the provided installation package chinaskills_cloud_paas_v2.0.2.iso to the /root directory on the master node, then extract its contents to the /opt directory:

[root@localhost ~]# ll
total 7446736
-rw-------. 1 root root       1579 Mar  7 22:46 anaconda-ks.cfg
-rw-r--r--. 1 root root 4712300544 Jun  7  2022 CentOS-7-x86_64-DVD-2009.iso
-rw-r--r--. 1 root root 2913150976 Jun 20  2022 chinaskills_cloud_paas_v2.0.2.iso

Create a directory to hold the CentOS yum repository:

[root@localhost ~]# mkdir /opt/centos

Mount the ISO image and copy its contents:

[root@localhost ~]# mount chinaskills_cloud_paas_v2.0.2.iso /mnt/
mount: /dev/loop0 is write-protected, mounting read-only
[root@localhost ~]# cp -rf /mnt/* /opt/
[root@localhost ~]# ll /opt/
total 1964852
drwxr-xr-x. 2 root root          6 May 27 19:33 centos
dr-xr-xr-x. 2 root root         55 May 27 19:35 dependencies
dr-xr-xr-x. 2 root root        181 May 27 19:35 extended-images
-r-xr-xr-x. 1 root root  615853450 May 27 19:35 harbor-offline.tar.gz
-r-xr-xr-x. 1 root root   13862382 May 27 19:35 helm-v3.7.1-linux-amd64.tar.gz
-r-xr-xr-x. 1 root root   21963365 May 27 19:35 istio.tar.gz
-r-xr-xr-x. 1 root root     143832 May 27 19:35 kubeeasy
-r-xr-xr-x. 1 root root 1339977057 May 27 19:35 kubernetes.tar.gz
-r-xr-xr-x. 1 root root   20196005 May 27 19:35 kubevirt.tar.gz

Installing kubeeasy

kubeeasy is a dedicated deployment tool for Kubernetes clusters that greatly simplifies the deployment process. Its features include:

Fully automated installation workflow;

DNS-based cluster identification;

Self-healing: everything runs in auto-scaling groups;

Support for multiple operating systems (such as Debian, Ubuntu 16.04, CentOS 7, and RHEL);

High-availability support.

Install the kubeeasy tool on the master node:

[root@localhost ~]# mv /opt/kubeeasy /usr/bin/
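
A quick sanity check that the tool is executable and resolvable on the PATH (a minimal sketch; only standard shell commands are used):

# Ensure the binary is executable and visible on the PATH
[root@localhost ~]# chmod +x /usr/bin/kubeeasy
[root@localhost ~]# command -v kubeeasy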

Installing Dependencies

This step installs tools such as docker-ce, git, unzip, vim, and wget.

Run the following command on the master node to install the dependency packages:

[root@localhost ~]# kubeeasy install depend \
--host 192.168.169.10,192.168.169.20 \
--user root \
--password 000000 \
--offline-file /opt/dependencies/base-rpms.tar.gz 

The output is as follows:

[2024-05-27 21:26:07] INFO:    [start] bash kubeeasy install depend --host 192.168.169.10,192.168.169.20 --user root --password ****** --offline-file /opt/dependencies/base-rpms.tar.gz
[2024-05-27 21:26:07] INFO:    [offline] unzip offline dependencies package on local.
[2024-05-27 21:26:09] INFO:    [offline] unzip offline dependencies package succeeded.
[2024-05-27 21:26:09] INFO:    [install] install dependencies packages on local.
[2024-05-27 21:27:11] INFO:    [install] install dependencies packages succeeded.
[2024-05-27 21:27:16] INFO:    [offline] 192.168.169.10: load offline dependencies file
[2024-05-27 21:27:20] INFO:    [offline] load offline dependencies file to 192.168.169.10 succeeded.
[2024-05-27 21:27:20] INFO:    [install] 192.168.169.10: install dependencies packages
[2024-05-27 21:27:21] INFO:    [install] 192.168.169.10: install dependencies packages succeeded.
[2024-05-27 21:27:26] INFO:    [offline] 192.168.169.20: load offline dependencies file
[2024-05-27 21:27:35] INFO:    [offline] load offline dependencies file to 192.168.169.20 succeeded.
[2024-05-27 21:27:35] INFO:    [install] 192.168.169.20: install dependencies packages
[2024-05-27 21:29:03] INFO:    [install] 192.168.169.20: install dependencies packages succeeded.
See detailed log >> /var/log/kubeinstall.log

Configuring Passwordless SSH

When installing a Kubernetes cluster, passwordless login must be configured between the cluster nodes for file transfer and communication.
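
For reference, the manual equivalent of what kubeeasy automates in the next two steps is an ordinary ssh-keygen / ssh-copy-id exchange (a sketch assuming the same node IPs and root password as above):

# Generate a key pair on the master and push the public key to each node
[root@localhost ~]# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
[root@localhost ~]# for host in 192.168.169.10 192.168.169.20; do ssh-copy-id -o StrictHostKeyChecking=no root@$host; done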

Run the following command on the master node to check connectivity between the cluster nodes:

[root@localhost ~]# kubeeasy check ssh \
--host 192.168.169.10,192.168.169.20 \
--user root \
--password 000000

The output is as follows:

[2024-05-27 21:29:49] INFO:    [start] bash kubeeasy check ssh --host 192.168.169.10,192.168.169.20 --user root --password ******
[2024-05-27 21:29:49] INFO:    [check] sshpass command exists.
[2024-05-27 21:29:49] INFO:    [check] ssh 192.168.169.10 connection succeeded.
[2024-05-27 21:29:49] INFO:    [check] ssh 192.168.169.20 connection succeeded.
See detailed log >> /var/log/kubeinstall.log

Generate and distribute the SSH keys:

[root@localhost ~]# kubeeasy create ssh-keygen \
--master 192.168.169.10 \
--worker 192.168.169.20 \
--user root \
--password 000000

The output is as follows:

[2024-05-27 21:31:56] INFO:    [start] bash kubeeasy create ssh-keygen --master 192.168.169.10 --worker 192.168.169.20 --user root --password ******
[2024-05-27 21:31:56] INFO:    [check] sshpass command exists.
[2024-05-27 21:31:57] INFO:    [check] ssh 192.168.169.10 connection succeeded.
[2024-05-27 21:31:57] INFO:    [check] ssh 192.168.169.20 connection succeeded.
[2024-05-27 21:31:58] INFO:    [create] create ssh keygen 192.168.169.10
[2024-05-27 21:31:58] INFO:    [create] create ssh keygen 192.168.169.10 succeeded.
[2024-05-27 21:31:59] INFO:    [create] create ssh keygen 192.168.169.20
[2024-05-27 21:31:59] INFO:    [create] create ssh keygen 192.168.169.20 succeeded.
See detailed log >> /var/log/kubeinstall.log

Deploying the Kubernetes Cluster

[root@localhost ~]# kubeeasy install k8s \
--master 192.168.169.10 \
--worker 192.168.169.20 \
--user root \
--password 000000 \
--version 1.22.1 \
--offline-file /opt/kubernetes.tar.gz 

The output below shows the errors:

[2024-05-27 21:34:16] INFO:    [start] bash kubeeasy install k8s --master 192.168.169.10 --worker 192.168.169.20 --user root --password ****** --version 1.22.1 --offline-file /opt/kubernetes.tar.gz
[2024-05-27 21:34:16] INFO:    [check] sshpass command exists.
[2024-05-27 21:34:16] INFO:    [check] rsync command exists.
[2024-05-27 21:34:17] INFO:    [check] ssh 192.168.169.10 connection succeeded.
[2024-05-27 21:34:17] INFO:    [check] ssh 192.168.169.20 connection succeeded.
[2024-05-27 21:34:17] INFO:    [offline] unzip offline package on local.
[2024-05-27 21:34:30] INFO:    [offline] unzip offline package succeeded.
[2024-05-27 21:34:30] INFO:    [offline] master 192.168.169.10: load offline file
[2024-05-27 21:34:31] INFO:    [offline] load offline file to 192.168.169.10 succeeded.
[2024-05-27 21:34:31] INFO:    [offline] master 192.168.169.10: disable the firewall
[2024-05-27 21:34:33] INFO:    [offline] 192.168.169.10: disable the firewall succeeded.
[2024-05-27 21:34:33] INFO:    [offline] worker 192.168.169.20: load offline file
[2024-05-27 21:35:32] INFO:    [offline] load offline file to 192.168.169.20 succeeded.
[2024-05-27 21:35:32] INFO:    [offline] worker 192.168.169.20: disable the firewall
[2024-05-27 21:35:34] INFO:    [offline] 192.168.169.20: disable the firewall succeeded.
[2024-05-27 21:35:34] INFO:    [get] Get 192.168.169.10 InternalIP.
[2024-05-27 21:35:35] INFO:    [result] get MGMT_NODE_IP value succeeded.
[2024-05-27 21:35:35] INFO:    [result] MGMT_NODE_IP is 192.168.169.10
[2024-05-27 21:35:35] INFO:    [init] master: 192.168.169.10
[2024-05-27 21:35:38] INFO:    [init] init master 192.168.169.10 succeeded.
[2024-05-27 21:35:38] INFO:    [init] master: 192.168.169.10 set hostname and hosts
[2024-05-27 21:35:38] INFO:    [init] 192.168.169.10 set hostname and hosts succeeded.
[2024-05-27 21:35:38] INFO:    [init] worker: 192.168.169.20
[2024-05-27 21:35:41] INFO:    [init] init worker 192.168.169.20 succeeded.
[2024-05-27 21:35:41] INFO:    [init] master: 192.168.169.20 set hostname and hosts
[2024-05-27 21:35:41] INFO:    [init] 192.168.169.20 set hostname and hosts succeeded.
[2024-05-27 21:35:41] INFO:    [install] install docker on 192.168.169.10.
[2024-05-27 21:35:42] ERROR:   [install] install docker on 192.168.169.10 failed.
[2024-05-27 21:35:42] INFO:    [install] install kube on 192.168.169.10
[2024-05-27 21:35:43] INFO:    [install] install kube on 192.168.169.10 succeeded.
[2024-05-27 21:35:43] INFO:    [install] install docker on 192.168.169.20.
[2024-05-27 21:35:44] ERROR:   [install] install docker on 192.168.169.20 failed.
[2024-05-27 21:35:44] INFO:    [install] install kube on 192.168.169.20
[2024-05-27 21:35:45] INFO:    [install] install kube on 192.168.169.20 succeeded.
[2024-05-27 21:35:45] INFO:    [kubeadm init] kubeadm init on 192.168.169.10
[2024-05-27 21:35:45] INFO:    [kubeadm init] 192.168.169.10: set kubeadm-config.yaml
[2024-05-27 21:35:45] INFO:    [kubeadm init] 192.168.169.10: set kubeadm-config.yaml succeeded.
[2024-05-27 21:35:45] INFO:    [kubeadm init] 192.168.169.10: kubeadm init start.
[2024-05-27 21:35:48] ERROR:   [kubeadm init] 192.168.169.10: kubeadm init failed.
ERROR Summary:
[2024-05-27 21:35:42] ERROR:   [install] install docker on 192.168.169.10 failed.
[2024-05-27 21:35:44] ERROR:   [install] install docker on 192.168.169.20 failed.
[2024-05-27 21:35:48] ERROR:   [kubeadm init] 192.168.169.10: kubeadm init failed.
See detailed log >> /var/log/kubeinstall.log

Checking the log shows that the Docker and containerd services were not started:

Server:
ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
errors pretty printing info
, error: exit status 1
[ERROR Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
[ERROR SystemVerification]: error verifying Docker info: "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[2024-05-27 21:35:48] ERROR:   [kubeadm init] 192.168.169.10: kubeadm init failed.

In fact, neither Docker nor containerd was installed at all:

[root@localhost ~]# systemctl status docker
Unit docker.service could not be found.
[root@localhost ~]# systemctl status containerd
Unit containerd.service could not be found.

Resolving the Error

# Configure the yum repository
[root@localhost ~]# mount CentOS-7-x86_64-DVD-2009.iso /mnt/
mount: /dev/loop1 is write-protected, mounting read-only
[root@localhost ~]# cp -rf /mnt/* /opt/centos/
[root@localhost ~]# ll /opt/centos/
total 328
-rw-r--r--. 1 root root     14 May 27 21:41 CentOS_BuildTag
drwxr-xr-x. 3 root root     35 May 27 21:41 EFI
-rw-r--r--. 1 root root    227 May 27 21:41 EULA
-rw-r--r--. 1 root root  18009 May 27 21:41 GPL
drwxr-xr-x. 3 root root     57 May 27 21:41 images
drwxr-xr-x. 2 root root    198 May 27 21:41 isolinux
drwxr-xr-x. 2 root root     43 May 27 21:41 LiveOS
drwxr-xr-x. 2 root root 225280 May 27 21:41 Packages
drwxr-xr-x. 2 root root   4096 May 27 21:41 repodata
-rw-r--r--. 1 root root   1690 May 27 21:41 RPM-GPG-KEY-CentOS-7
-rw-r--r--. 1 root root   1690 May 27 21:41 RPM-GPG-KEY-CentOS-Testing-7
-r--r--r--. 1 root root   2883 May 27 21:41 TRANS.TBL

Create the local yum repository file:

[root@localhost ~]# cat > /etc/yum.repos.d/centos.repo << EOF
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
EOF
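
Before relying on it, it is worth verifying that yum can actually read the new repository (standard yum commands; the repo id centos matches the file above):

# Rebuild the cache and confirm the repo and the libseccomp package are visible
[root@localhost ~]# yum clean all
[root@localhost ~]# yum repolist
[root@localhost ~]# yum info libseccomp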

Running kubeeasy clears the yum repository configuration, so back it up first:

[root@localhost ~]# cp /etc/yum.repos.d/centos.repo .
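
Once a later kubeeasy run has wiped /etc/yum.repos.d, the backup can simply be copied back (a sketch using the backup location above):

# Restore the local repository definition after kubeeasy clears it
[root@localhost ~]# cp ~/centos.repo /etc/yum.repos.d/
[root@localhost ~]# yum clean all && yum repolist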

About libseccomp

The Docker installation failed because libseccomp, which the bundled docker-ce packages depend on, was missing on the nodes. libseccomp is an open-source library that exposes Seccomp (Secure Computing), a powerful security-hardening facility of the Linux kernel. Its goal is to let an application restrict the system calls that it, or a specific process, may issue, shrinking the attack surface and improving system security.

Seccomp itself is a kernel feature that lets a program define a set of rules for which system calls may be executed and which will be blocked; this mechanism is very effective against malicious code and zero-day attacks. libseccomp is the user-space interface to Seccomp, providing a high-level API and tooling so developers can use the feature without kernel-level programming.

libseccomp provides the following key features:

  1. Filter syntax: rules are defined on top of BPF (Berkeley Packet Filter), an efficient, compiled instruction set used to describe system-call filtering policies.
  2. API support: a C library interface that integrates easily into other software, with Go and Python bindings as well.
  3. Dynamic policy adjustment: filter rules can be modified while a program runs, adapting to the security needs of different scenarios.
  4. Compatibility: broad support across Linux kernel versions, including fallback mechanisms for older kernels.

Install it on the master node from the local repository:

[root@localhost ~]# yum install -y libseccomp
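
A quick check that the library actually landed (standard rpm query):

# Confirm the package is installed and note its version
[root@localhost ~]# rpm -q libseccomp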

Configuring the vsftpd Service

# vsftpd was already installed during the kubeeasy run, and the firewall has already been disabled
[root@localhost ~]# cat >> /etc/vsftpd/vsftpd.conf << EOF
anon_root=/opt/
EOF

Restart the vsftpd service:

[root@localhost ~]# systemctl restart vsftpd
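
Since the worker node will pull packages over anonymous FTP, it is worth confirming the share is reachable (assuming curl is available; anon_root exposes /opt, so /centos should list the repository):

# Make vsftpd survive reboots and verify anonymous access
[root@localhost ~]# systemctl enable vsftpd
[root@localhost ~]# curl -s ftp://192.168.169.10/centos/ | head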

Configure the yum repository on the worker node:

[root@localhost ~]# cat > /etc/yum.repos.d/centos.repo << EOF
[centos]
name=centos
baseurl=ftp://192.168.169.10/centos
gpgcheck=0
EOF

Install the dependency on the worker node as well:

[root@localhost ~]# yum install -y libseccomp

Installing the Dependency Packages Again

[root@localhost ~]# kubeeasy install depend \
--host 192.168.169.10,192.168.169.20 \
--user root \
--password 000000 \
--offline-file /opt/dependencies/base-rpms.tar.gz

The output is as follows:

[2024-05-27 21:51:36] INFO:    [start] bash kubeeasy install depend --host 192.168.169.10,192.168.169.20 --user root --password ****** --offline-file /opt/dependencies/base-rpms.tar.gz
[2024-05-27 21:51:36] INFO:    [offline] unzip offline dependencies package on local.
[2024-05-27 21:51:38] INFO:    [offline] unzip offline dependencies package succeeded.
[2024-05-27 21:51:38] INFO:    [install] install dependencies packages on local.
[2024-05-27 21:52:07] INFO:    [install] install dependencies packages succeeded.
[2024-05-27 21:52:08] INFO:    [offline] 192.168.169.10: load offline dependencies file
[2024-05-27 21:52:12] INFO:    [offline] load offline dependencies file to 192.168.169.10 succeeded.
[2024-05-27 21:52:12] INFO:    [install] 192.168.169.10: install dependencies packages
[2024-05-27 21:52:12] INFO:    [install] 192.168.169.10: install dependencies packages succeeded.
[2024-05-27 21:52:13] INFO:    [offline] 192.168.169.20: load offline dependencies file
[2024-05-27 21:52:20] INFO:    [offline] load offline dependencies file to 192.168.169.20 succeeded.
[2024-05-27 21:52:20] INFO:    [install] 192.168.169.20: install dependencies packages
[2024-05-27 21:52:52] INFO:    [install] 192.168.169.20: install dependencies packages succeeded.

Check the service status:

[root@localhost ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: https://docs.docker.com
[root@localhost ~]# systemctl status containerd
● containerd.service - containerd container runtime
   Loaded: loaded (/usr/lib/systemd/system/containerd.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: https://containerd.io
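
The units now exist but are inactive; the kubeeasy k8s step will start them itself, but they can also be brought up manually to rule out any remaining runtime problem (standard systemctl/docker commands):

# Optional: start the runtimes by hand and confirm the daemon responds
[root@localhost ~]# systemctl enable --now containerd docker
[root@localhost ~]# docker info | head -n 5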

Installing Kubernetes Again

[root@localhost ~]# kubeeasy install k8s \
--master 192.168.169.10 \
--worker 192.168.169.20 \
--user root \
--password 000000 \
--version 1.22.1 \
--offline-file /opt/kubernetes.tar.gz 

The output below shows that the errors are resolved:

[2024-05-27 21:53:36] INFO:    [start] bash kubeeasy install k8s --master 192.168.169.10 --worker 192.168.169.20 --user root --password ****** --version 1.22.1 --offline-file /opt/kubernetes.tar.gz
[2024-05-27 21:53:36] INFO:    [check] sshpass command exists.
[2024-05-27 21:53:36] INFO:    [check] rsync command exists.
[2024-05-27 21:53:37] INFO:    [check] ssh 192.168.169.10 connection succeeded.
[2024-05-27 21:53:37] INFO:    [check] ssh 192.168.169.20 connection succeeded.
[2024-05-27 21:53:37] INFO:    [offline] unzip offline package on local.
[2024-05-27 21:53:48] INFO:    [offline] unzip offline package succeeded.
[2024-05-27 21:53:48] INFO:    [offline] master 192.168.169.10: load offline file
[2024-05-27 21:53:49] INFO:    [offline] load offline file to 192.168.169.10 succeeded.
[2024-05-27 21:53:49] INFO:    [offline] master 192.168.169.10: disable the firewall
[2024-05-27 21:53:49] INFO:    [offline] 192.168.169.10: disable the firewall succeeded.
[2024-05-27 21:53:49] INFO:    [offline] worker 192.168.169.20: load offline file
[2024-05-27 21:53:50] INFO:    [offline] load offline file to 192.168.169.20 succeeded.
[2024-05-27 21:53:50] INFO:    [offline] worker 192.168.169.20: disable the firewall
[2024-05-27 21:53:51] INFO:    [offline] 192.168.169.20: disable the firewall succeeded.
[2024-05-27 21:53:51] INFO:    [get] Get 192.168.169.10 InternalIP.
[2024-05-27 21:53:51] INFO:    [result] get MGMT_NODE_IP value succeeded.
[2024-05-27 21:53:51] INFO:    [result] MGMT_NODE_IP is 192.168.169.10
[2024-05-27 21:53:51] INFO:    [init] master: 192.168.169.10
[2024-05-27 21:53:53] INFO:    [init] init master 192.168.169.10 succeeded.
[2024-05-27 21:53:54] INFO:    [init] master: 192.168.169.10 set hostname and hosts
[2024-05-27 21:53:55] INFO:    [init] 192.168.169.10 set hostname and hosts succeeded.
[2024-05-27 21:53:55] INFO:    [init] worker: 192.168.169.20
[2024-05-27 21:53:57] INFO:    [init] init worker 192.168.169.20 succeeded.
[2024-05-27 21:53:57] INFO:    [init] master: 192.168.169.20 set hostname and hosts
[2024-05-27 21:53:59] INFO:    [init] 192.168.169.20 set hostname and hosts succeeded.
[2024-05-27 21:53:59] INFO:    [install] install docker on 192.168.169.10.
[2024-05-27 21:56:17] INFO:    [install] install docker on 192.168.169.10 succeeded.
[2024-05-27 21:56:17] INFO:    [install] install kube on 192.168.169.10
[2024-05-27 21:56:18] INFO:    [install] install kube on 192.168.169.10 succeeded.
[2024-05-27 21:56:18] INFO:    [install] install docker on 192.168.169.20.
[2024-05-27 22:00:47] INFO:    [install] install docker on 192.168.169.20 succeeded.
[2024-05-27 22:00:47] INFO:    [install] install kube on 192.168.169.20
[2024-05-27 22:00:49] INFO:    [install] install kube on 192.168.169.20 succeeded.
[2024-05-27 22:00:49] INFO:    [kubeadm init] kubeadm init on 192.168.169.10
[2024-05-27 22:00:49] INFO:    [kubeadm init] 192.168.169.10: set kubeadm-config.yaml
[2024-05-27 22:00:49] INFO:    [kubeadm init] 192.168.169.10: set kubeadm-config.yaml succeeded.
[2024-05-27 22:00:49] INFO:    [kubeadm init] 192.168.169.10: kubeadm init start.
[2024-05-27 22:01:20] INFO:    [kubeadm init] 192.168.169.10: kubeadm init succeeded.
[2024-05-27 22:01:23] INFO:    [kubeadm init] 192.168.169.10: set kube config.
[2024-05-27 22:01:24] INFO:    [kubeadm init] 192.168.169.10: set kube config succeeded.
[2024-05-27 22:01:24] INFO:    [kubeadm init] 192.168.169.10: delete master taint
[2024-05-27 22:01:24] INFO:    [kubeadm init] 192.168.169.10: delete master taint succeeded.
[2024-05-27 22:01:25] INFO:    [kubeadm init] Auto-Approve kubelet cert csr succeeded.
[2024-05-27 22:01:25] INFO:    [kubeadm join] master: get join token and cert info
[2024-05-27 22:01:25] INFO:    [result] get CACRT_HASH value succeeded.
[2024-05-27 22:01:26] INFO:    [result] get INTI_CERTKEY value succeeded.
[2024-05-27 22:01:26] INFO:    [result] get INIT_TOKEN value succeeded.
[2024-05-27 22:01:26] INFO:    [kubeadm join] worker 192.168.169.20 join cluster.
[2024-05-27 22:01:40] INFO:    [kubeadm join] worker 192.168.169.20 join cluster succeeded.
[2024-05-27 22:01:40] INFO:    [kubeadm join] set 192.168.169.20 worker node role.
[2024-05-27 22:01:40] INFO:    [kubeadm join] set 192.168.169.20 worker node role succeeded.
[2024-05-27 22:01:40] INFO:    [network] add flannel network
[2024-05-27 22:01:41] INFO:    [calico] change flannel pod subnet succeeded.
[2024-05-27 22:01:41] INFO:    [apply] apply kube-flannel.yaml file
[2024-05-27 22:01:42] INFO:    [apply] apply kube-flannel.yaml file succeeded.
[2024-05-27 22:01:45] INFO:    [waiting] waiting kube-flannel-ds
[2024-05-27 22:01:46] INFO:    [waiting] kube-flannel-ds pods ready succeeded.
[2024-05-27 22:01:46] INFO:    [apply] apply coredns-cm.yaml file
[2024-05-27 22:01:47] INFO:    [apply] apply coredns-cm.yaml file succeeded.
[2024-05-27 22:01:47] INFO:    [apply] apply metrics-server.yaml file
[2024-05-27 22:01:48] INFO:    [apply] apply metrics-server.yaml file succeeded.
[2024-05-27 22:01:51] INFO:    [waiting] waiting metrics-server
[2024-05-27 22:02:01] INFO:    [waiting] metrics-server pods ready succeeded.
[2024-05-27 22:02:01] INFO:    [apply] apply dashboard.yaml file
[2024-05-27 22:02:02] INFO:    [apply] apply dashboard.yaml file succeeded.
[2024-05-27 22:02:05] INFO:    [waiting] waiting dashboard-agent
[2024-05-27 22:02:06] INFO:    [waiting] dashboard-agent pods ready succeeded.
[2024-05-27 22:02:09] INFO:    [waiting] waiting dashboard-en
[2024-05-27 22:02:09] INFO:    [waiting] dashboard-en pods ready succeeded.
[2024-05-27 22:02:24] INFO:    [cluster] kubernetes cluster status
+ kubectl get node -o wide
NAME               STATUS   ROLES                         AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master-node1   Ready    control-plane,master,worker   70s   v1.22.1   192.168.169.10   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.8
k8s-worker-node1   Ready    worker                        49s   v1.22.1   192.168.169.20   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.8
+ kubectl get pods -A -o wide
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE               NOMINATED NODE   READINESS GATES
dashboard-cn   dashboard-agent-cd88cf454-4m5ct            1/1     Running   0          23s   10.244.1.3       k8s-worker-node1   <none>           <none>
dashboard-cn   dashboard-cn-64bd46887f-dtkxv              1/1     Running   0          23s   10.244.1.2       k8s-worker-node1   <none>           <none>
dashboard-en   dashboard-en-55596d469-84ggw               1/1     Running   0          23s   10.244.1.4       k8s-worker-node1   <none>           <none>
kube-system    coredns-78fcd69978-b77bx                   1/1     Running   0          52s   10.244.0.2       k8s-master-node1   <none>           <none>
kube-system    coredns-78fcd69978-lnwxl                   1/1     Running   0          52s   10.244.0.3       k8s-master-node1   <none>           <none>
kube-system    etcd-k8s-master-node1                      1/1     Running   0          65s   192.168.169.10   k8s-master-node1   <none>           <none>
kube-system    kube-apiserver-k8s-master-node1            1/1     Running   0          65s   192.168.169.10   k8s-master-node1   <none>           <none>
kube-system    kube-controller-manager-k8s-master-node1   1/1     Running   0          65s   192.168.169.10   k8s-master-node1   <none>           <none>
kube-system    kube-flannel-ds-sfczm                      1/1     Running   0          43s   192.168.169.10   k8s-master-node1   <none>           <none>
kube-system    kube-flannel-ds-tzz9l                      1/1     Running   0          43s   192.168.169.20   k8s-worker-node1   <none>           <none>
kube-system    kube-proxy-gg64q                           1/1     Running   0          49s   192.168.169.20   k8s-worker-node1   <none>           <none>
kube-system    kube-proxy-p5thp                           1/1     Running   0          52s   192.168.169.10   k8s-master-node1   <none>           <none>
kube-system    kube-scheduler-k8s-master-node1            1/1     Running   0          68s   192.168.169.10   k8s-master-node1   <none>           <none>
kube-system    metrics-server-77564bc84d-5687x            1/1     Running   0          37s   192.168.169.10   k8s-master-node1   <none>           <none>
See detailed log >> /var/log/kubeinstall.log

After the deployment completes, check the cluster status:

[root@localhost ~]# kubectl cluster-info
Kubernetes control plane is running at https://apiserver.cluster.local:6443
CoreDNS is running at https://apiserver.cluster.local:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@localhost ~]# kubectl get nodes
NAME               STATUS   ROLES                         AGE     VERSION
k8s-master-node1   Ready    control-plane,master,worker   2m35s   v1.22.1
k8s-worker-node1   Ready    worker                        2m14s   v1.22.1
[root@localhost ~]# kubectl get pods --all-namespaces
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE
dashboard-cn   dashboard-agent-cd88cf454-4m5ct            1/1     Running   0          117s
dashboard-cn   dashboard-cn-64bd46887f-dtkxv              1/1     Running   0          117s
dashboard-en   dashboard-en-55596d469-84ggw               1/1     Running   0          117s
kube-system    coredns-78fcd69978-b77bx                   1/1     Running   0          2m26s
kube-system    coredns-78fcd69978-lnwxl                   1/1     Running   0          2m26s
kube-system    etcd-k8s-master-node1                      1/1     Running   0          2m39s
kube-system    kube-apiserver-k8s-master-node1            1/1     Running   0          2m39s
kube-system    kube-controller-manager-k8s-master-node1   1/1     Running   0          2m39s
kube-system    kube-flannel-ds-sfczm                      1/1     Running   0          2m17s
kube-system    kube-flannel-ds-tzz9l                      1/1     Running   0          2m17s
kube-system    kube-proxy-gg64q                           1/1     Running   0          2m23s
kube-system    kube-proxy-p5thp                           1/1     Running   0          2m26s
kube-system    kube-scheduler-k8s-master-node1            1/1     Running   0          2m42s
kube-system    metrics-server-77564bc84d-5687x            1/1     Running   0          2m11s

Check the node load:

[root@localhost ~]# kubectl top nodes --use-protocol-buffers
NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master-node1   358m         17%    1171Mi          30%       
k8s-worker-node1   141m         7%     699Mi           18%

Deploying KubeVirt

[root@localhost ~]# kubeeasy add --virt kubevirt

The output is as follows:

[2024-05-27 22:07:04] INFO:    [start] bash kubeeasy add --virt kubevirt
[2024-05-27 22:07:04] INFO:    [check] sshpass command exists.
[2024-05-27 22:07:04] INFO:    [check] wget command exists.
[2024-05-27 22:07:04] INFO:    [check] conn apiserver succeeded.
[2024-05-27 22:07:05] INFO:    [virt] add kubevirt
[2024-05-27 22:07:05] INFO:    [apply] apply kubevirt-operator.yaml file
[2024-05-27 22:07:06] INFO:    [apply] apply kubevirt-operator.yaml file succeeded.
[2024-05-27 22:07:09] INFO:    [waiting] waiting kubevirt
[2024-05-27 22:07:16] INFO:    [waiting] kubevirt pods ready succeeded.
[2024-05-27 22:07:16] INFO:    [apply] apply kubevirt-cr.yaml file
[2024-05-27 22:07:17] INFO:    [apply] apply kubevirt-cr.yaml file succeeded.
[2024-05-27 22:07:50] INFO:    [waiting] waiting kubevirt
[2024-05-27 22:08:00] INFO:    [waiting] kubevirt pods ready succeeded.
[2024-05-27 22:08:04] INFO:    [waiting] waiting kubevirt
[2024-05-27 22:08:49] INFO:    [waiting] kubevirt pods ready succeeded.
[2024-05-27 22:08:52] INFO:    [waiting] waiting kubevirt
[2024-05-27 22:08:52] INFO:    [waiting] kubevirt pods ready succeeded.
[2024-05-27 22:08:52] INFO:    [apply] apply multus-daemonset.yaml file
[2024-05-27 22:08:54] INFO:    [apply] apply multus-daemonset.yaml file succeeded.
[2024-05-27 22:08:57] INFO:    [waiting] waiting kube-multus
[2024-05-27 22:08:58] INFO:    [waiting] kube-multus pods ready succeeded.
[2024-05-27 22:08:58] INFO:    [apply] apply multus-cni-macvlan.yaml file
[2024-05-27 22:09:01] INFO:    [apply] apply multus-cni-macvlan.yaml file succeeded.
[2024-05-27 22:09:01] INFO:    [cluster] kubernetes kubevirt status
+ kubectl get pod -n kubevirt -o wide
NAME                              READY   STATUS    RESTARTS   AGE    IP           NODE               NOMINATED NODE   READINESS GATES
virt-api-86f9d6d4f-4vdrq          1/1     Running   0          81s    10.244.0.5   k8s-master-node1   <none>           <none>
virt-api-86f9d6d4f-56t5x          1/1     Running   0          81s    10.244.1.7   k8s-worker-node1   <none>           <none>
virt-controller-54b79f5db-4dfhq   1/1     Running   0          53s    10.244.0.7   k8s-master-node1   <none>           <none>
virt-controller-54b79f5db-vflkh   1/1     Running   0          53s    10.244.1.8   k8s-worker-node1   <none>           <none>
virt-handler-4kzm9                1/1     Running   0          53s    10.244.1.9   k8s-worker-node1   <none>           <none>
virt-handler-rrsdz                1/1     Running   0          53s    10.244.0.6   k8s-master-node1   <none>           <none>
virt-operator-6fbd74566c-9nrj6    1/1     Running   0          115s   10.244.0.4   k8s-master-node1   <none>           <none>
virt-operator-6fbd74566c-vmwz7    1/1     Running   0          115s   10.244.1.5   k8s-worker-node1   <none>           <none>
See detailed log >> /var/log/kubeinstall.log
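
To confirm the operator reports a healthy rollout, the KubeVirt custom resource can be queried (a standard KubeVirt status check; the CR created by kubevirt-cr.yaml is named kubevirt, and the expected phase is Deployed):

# Check the KubeVirt CR phase and the registered VM API resources
[root@localhost ~]# kubectl -n kubevirt get kubevirt kubevirt -o jsonpath='{.status.phase}'; echo
[root@localhost ~]# kubectl api-resources | grep -i virtualmachine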

Deploying the Istio Service Mesh

[root@localhost ~]# kubeeasy add --istio istio

The output is as follows:

[2024-05-27 22:09:53] INFO:    [start] bash kubeeasy add --istio istio
[2024-05-27 22:09:53] INFO:    [check] sshpass command exists.
[2024-05-27 22:09:53] INFO:    [check] wget command exists.
[2024-05-27 22:09:53] INFO:    [check] conn apiserver succeeded.
[2024-05-27 22:09:55] INFO:    [istio] add istio
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete
Making this installation the default for injection and validation.
Thank you for installing Istio 1.12.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/FegQbc9UvePd4Z9z7
[2024-05-27 22:10:23] INFO:    [waiting] waiting istio-egressgateway
[2024-05-27 22:10:23] INFO:    [waiting] istio-egressgateway pods ready succeeded.
[2024-05-27 22:10:26] INFO:    [waiting] waiting istio-ingressgateway
[2024-05-27 22:10:26] INFO:    [waiting] istio-ingressgateway pods ready succeeded.
[2024-05-27 22:10:29] INFO:    [waiting] waiting istiod
[2024-05-27 22:10:29] INFO:    [waiting] istiod pods ready succeeded.
[2024-05-27 22:10:33] INFO:    [waiting] waiting grafana
[2024-05-27 22:10:35] INFO:    [waiting] grafana pods ready succeeded.
[2024-05-27 22:10:38] INFO:    [waiting] waiting jaeger
[2024-05-27 22:10:38] INFO:    [waiting] jaeger pods ready succeeded.
[2024-05-27 22:10:41] INFO:    [waiting] waiting kiali
[2024-05-27 22:11:00] INFO:    [waiting] kiali pods ready succeeded.
[2024-05-27 22:11:03] INFO:    [waiting] waiting prometheus
[2024-05-27 22:11:03] INFO:    [waiting] prometheus pods ready succeeded.
[2024-05-27 22:11:03] INFO:    [cluster] kubernetes istio status
+ kubectl get pod -n istio-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
grafana-6ccd56f4b6-kg7zv               1/1     Running   0          34s   10.244.1.13   k8s-worker-node1   <none>           <none>
istio-egressgateway-7f4864f59c-kxfl4   1/1     Running   0          54s   10.244.1.12   k8s-worker-node1   <none>           <none>
istio-ingressgateway-55d9fb9f-m4dg8    1/1     Running   0          54s   10.244.1.11   k8s-worker-node1   <none>           <none>
istiod-555d47cb65-k6xkz                1/1     Running   0          61s   10.244.1.10   k8s-worker-node1   <none>           <none>
jaeger-5d44bc5c5d-gbgrk                1/1     Running   0          34s   10.244.1.14   k8s-worker-node1   <none>           <none>
kiali-9f9596d69-clcl8                  1/1     Running   0          34s   10.244.1.15   k8s-worker-node1   <none>           <none>
prometheus-64fd8ccd65-d9zqq            2/2     Running   0          34s   10.244.1.16   k8s-worker-node1   <none>           <none>
See detailed log >> /var/log/kubeinstall.log
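
A couple of follow-up checks: verify the control plane responds and, if automatic sidecar injection is wanted for a namespace, label it (standard istioctl/kubectl usage; this assumes the installer left istioctl on the PATH, and default is only an example namespace):

# Verify the istio control plane and enable automatic sidecar injection
[root@localhost ~]# istioctl version
[root@localhost ~]# kubectl label namespace default istio-injection=enabled --overwrite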

Deploying the Harbor Registry

[root@localhost ~]# kubeeasy add --registry harbor

The output is as follows:

[2024-05-27 22:12:29] INFO:    [start] bash kubeeasy add --registry harbor
[2024-05-27 22:12:29] INFO:    [check] sshpass command exists.
[2024-05-27 22:12:29] INFO:    [check] wget command exists.
[2024-05-27 22:12:29] INFO:    [check] conn apiserver succeeded.
[2024-05-27 22:12:29] INFO:    [offline] unzip offline harbor package on local.
[2024-05-27 22:12:58] INFO:    [offline] installing docker-compose on local.
[2024-05-27 22:12:59] INFO:    [offline] Installing harbor on local.
[Step 0]: checking if docker is installed ...
Note: docker version: 20.10.14
[Step 1]: checking docker-compose is installed ...
Note: docker-compose version: 2.2.1
[Step 2]: loading Harbor images ...
[Step 3]: preparing environment ...
[Step 4]: preparing harbor configs ...
prepare base dir is set to /opt/harbor
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
[Step 5]: starting Harbor ...
[+] Running 10/10
 ⠿ Network harbor_harbor        Created      0.3s
 ⠿ Container harbor-log         Started      2.3s
 ⠿ Container harbor-portal      Started      9.5s
 ⠿ Container registryctl        Started      9.3s
 ⠿ Container redis              Started      8.4s
 ⠿ Container registry           Started      9.3s
 ⠿ Container harbor-db          Started      9.4s
 ⠿ Container harbor-core        Started     10.2s
 ⠿ Container nginx              Started     13.6s
 ⠿ Container harbor-jobservice  Started     13.3s
✔ ----Harbor has been installed and started successfully.----
[2024-05-27 22:16:30] INFO:    [cluster] kubernetes Harbor status
+ docker-compose -f /opt/harbor/docker-compose.yml ps
NAME                COMMAND                  SERVICE             STATUS              PORTS
harbor-core         "/harbor/entrypoint.…"   core                running (healthy)   
harbor-db           "/docker-entrypoint.…"   postgresql          running (healthy)   
harbor-jobservice   "/harbor/entrypoint.…"   jobservice          running (healthy)   
harbor-log          "/bin/sh -c /usr/loc…"   log                 running (healthy)   127.0.0.1:1514->10514/tcp
harbor-portal       "nginx -g 'daemon of…"   portal              running (healthy)   
nginx               "nginx -g 'daemon of…"   proxy               running (healthy)   0.0.0.0:80->8080/tcp, :::80->8080/tcp
redis               "redis-server /etc/r…"   redis               running (healthy)   
registry            "/home/harbor/entryp…"   registry            running (healthy)   
registryctl         "/home/harbor/start.…"   registryctl         running (healthy)
See detailed log >> /var/log/kubeinstall.log
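
To use the new registry, log in with Docker. Harbor's stock default credentials are admin / Harbor12345; the offline bundle may have changed them, so treat the values below as assumptions. Loopback HTTP registries are allowed by Docker out of the box, while other nodes would additionally need 192.168.169.10 listed under insecure-registries in /etc/docker/daemon.json:

# Log in from the master over loopback (plain HTTP is permitted for 127.0.0.1)
[root@k8s-master-node1 ~]# docker login 127.0.0.1 -u admin -p Harbor12345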

Check the cluster resources:

[root@k8s-master-node1 ~]# kubectl get pods -A
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE
dashboard-cn   dashboard-agent-cd88cf454-4m5ct            1/1     Running   0          17m
dashboard-cn   dashboard-cn-64bd46887f-7z59r              1/1     Running   0          55s
dashboard-en   dashboard-en-55596d469-84ggw               1/1     Running   0          17m
istio-system   grafana-6ccd56f4b6-kg7zv                   1/1     Running   0          9m17s
istio-system   istio-egressgateway-7f4864f59c-kxfl4       1/1     Running   0          9m37s
istio-system   istio-ingressgateway-55d9fb9f-m4dg8        1/1     Running   0          9m37s
istio-system   istiod-555d47cb65-k6xkz                    1/1     Running   0          9m44s
istio-system   jaeger-5d44bc5c5d-gbgrk                    1/1     Running   0          9m17s
istio-system   kiali-9f9596d69-clcl8                      1/1     Running   0          9m17s
istio-system   prometheus-64fd8ccd65-d9zqq                2/2     Running   0          9m17s
kube-system    coredns-78fcd69978-b77bx                   1/1     Running   0          18m
kube-system    coredns-78fcd69978-lnwxl                   1/1     Running   0          18m
kube-system    etcd-k8s-master-node1                      1/1     Running   0          18m
kube-system    kube-apiserver-k8s-master-node1            1/1     Running   0          18m
kube-system    kube-controller-manager-k8s-master-node1   1/1     Running   0          18m
kube-system    kube-flannel-ds-sfczm                      1/1     Running   0          18m
kube-system    kube-flannel-ds-tzz9l                      1/1     Running   0          18m
kube-system    kube-multus-ds-9h5bb                       1/1     Running   0          10m
kube-system    kube-multus-ds-rbnsp                       1/1     Running   0          10m
kube-system    kube-proxy-gg64q                           1/1     Running   0          18m
kube-system    kube-proxy-p5thp                           1/1     Running   0          18m
kube-system    kube-scheduler-k8s-master-node1            1/1     Running   0          18m
kube-system    metrics-server-77564bc84d-5687x            1/1     Running   0          17m
kubevirt       virt-api-86f9d6d4f-4vdrq                   1/1     Running   0          12m
kubevirt       virt-api-86f9d6d4f-56t5x                   1/1     Running   0          12m
kubevirt       virt-controller-54b79f5db-4dfhq            1/1     Running   0          11m
kubevirt       virt-controller-54b79f5db-vflkh            1/1     Running   0          11m
kubevirt       virt-handler-4kzm9                         1/1     Running   0          11m
kubevirt       virt-handler-rrsdz                         1/1     Running   0          11m
kubevirt       virt-operator-6fbd74566c-9nrj6             1/1     Running   0          12m
kubevirt       virt-operator-6fbd74566c-vmwz7             1/1     Running   0          12m

Other Troubleshooting

Check whether there is enough disk space:

[root@k8s-master-node1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 2.0G     0  2.0G   0% /dev
tmpfs                    2.0G     0  2.0G   0% /dev/shm
tmpfs                    2.0G   17M  2.0G   1% /run
tmpfs                    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/centos-root   94G   25G   69G  27% /
/dev/sda1               1014M  138M  877M  14% /boot
/dev/mapper/centos-home  2.0G   33M  2.0G   2% /home
tmpfs                    394M     0  394M   0% /run/user/0
/dev/loop1               4.4G  4.4G     0 100% /mnt
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/b039687a23f8fae6c5187496752b40f9680b1176a6b450012da37205da799228/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/3aa2dd9ef3f5215fef69348e2915ae3fdede0172d10c87233ba5d495c7f3481c/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/b0d856449d7fd4be80f1dbab0d8eb7b6a12435ebc645d2bd67442ece1c957bfa/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/83779008afa2470bee75fcf40986a2b88677c070a142434d2afd24ccc6595c8c/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/a25f32a6997c8e8837e3fd53f8f49317ba00d3792fd2400eaeb37651ffc6adf1/mounts/shm
shm                       64M     0   64M   0% /var/lib/docker/containers/1bc9a2dcf244a4bab3f6b7b452220d57f84f751fde6d2343041e13752d43cb5c/mounts/shm
shm                       64M     0   64M   0% /var/lib/docker/containers/ad389f9f68ba4d07e1e230f0263e62c59b93de201ddf66969037cb5e3305d9b0/mounts/shm
shm                       64M     0   64M   0% /var/lib/docker/containers/3d907398b2cf132c4c712543c9b86fae338a984e52ddc1141bb60a9309c965b3/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/e19cf1eb10fcd855424dfd274e4da3302743ae22f5b30571203c5235e6c84f29/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/e4ca942935911960946ffb8f3b32f064c8e3696117f61c9ab4e079dfd7711d53/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/fe04aff4c51e8579db0c52d8313ea09aa19cefd9e4183d9fbfc7fe008fd85f0e/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/0ba8948cc724f519001633dbaee7bb4faa5b96e83bc47545c190d2040d7dc975/merged
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/ef02c2ea-55c0-470f-9d5f-fded20a23ca6/volumes/kubernetes.io~projected/kube-api-access-jb4jm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/971ecbd26e0fe7add7b25144508b1812284143fb62b0e8e8d059a2d25b1205b6/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/0e032d5f3aa5b4cb696eeb0cf160ea7180c223eec7adf485f6eb69cf2f60de0e/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/2874cd9c985d553a8617e52c4e37ce07185e2522df5366354da35eb36827dc8f/merged
tmpfs                     50M   12K   50M   1% /var/lib/kubelet/pods/92783df2-056d-403a-aa81-15c585d725bf/volumes/kubernetes.io~projected/kube-api-access-sbfd5
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/ba0551bd461d5077cfdaa4ecf2269558b1ecb116c21dc213d28aad352c36066c/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/ec37c7d0bb4a287200b840af79317ddf66e29efdee0e501fe69462faabe1e68d/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/2f2caf724cfce283b0770371e2d367b52adb6ebc0e81ca81a9741dbe9dece228/merged
tmpfs                    170M   12K  170M   1% /var/lib/kubelet/pods/a81ce6dc-f02a-4053-bf5d-92353bf0b260/volumes/kubernetes.io~projected/kube-api-access-q56s8
tmpfs                    170M   12K  170M   1% /var/lib/kubelet/pods/7666fe20-f005-408d-898c-6515f9c9e82b/volumes/kubernetes.io~projected/kube-api-access-dfl4j
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/8b80eafa-2b69-4727-a4b4-60b1601af7b2/volumes/kubernetes.io~projected/kube-api-access-fsvlh
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/16ef5f9dc887f97e540a1a71fbe8b11c06911b5d679a61d8bfe75e583510545c/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/47a4cdd51151d957850c7064def32e664d0d9f6432955db7a7e9eb5c5e53eda5/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/05d335b1ad2fa4bfc722351d75ccc90c459c996d9e991fd48a4b0b5135cb51ba/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/a2447dc63f5b6a7a6910b5d7cca2fdcd4ea29141659b130c8d02b5a83dc26197/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/ef6d7fea914b853924850dc9aa1ca536ab6f16d702d0031e7edd621fc5ffa460/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/90753d4fe75abf6a3e9296947175f9aaeccbc64ff5b6cc19d68a03e75f546767/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/61be0dd3df2fff2c28b7ca31510123e8bd9665068d8c6ad9a106bce71467566c/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/c1bc3abea4903c4b70db30436b2163fb9d8360b38fa7f7fe68c7b133f29a7c52/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/9b833c94fa3d68af7c543a776805fd2640c1572a45ba343b58c8214442968613/merged
tmpfs                    3.8G  8.0K  3.8G   1% /var/lib/kubelet/pods/b965e58c-5af4-4642-967d-c2478bd13933/volumes/kubernetes.io~secret/kubevirt-operator-certs
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/b965e58c-5af4-4642-967d-c2478bd13933/volumes/kubernetes.io~projected/kube-api-access-kxxt5
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/a3d7abde028b02ce103634320b26d6375c5fb2e3dd66a0d416276c7166941410/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/0178ab4dcec3e23eda1f48bff7764cd5df4ed8266eb88f745ad4a58343f016d9/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/2335318d42637beb79eb21832921ff085ae753aa1029ef17b570f2e962e8cfdd/merged
tmpfs                    3.8G  8.0K  3.8G   1% /var/lib/kubelet/pods/fa2af6c2-9912-464b-8d43-5fcd6473fc13/volumes/kubernetes.io~secret/kubevirt-virt-handler-certs
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/fa2af6c2-9912-464b-8d43-5fcd6473fc13/volumes/kubernetes.io~projected/kube-api-access-rspl9
tmpfs                    3.8G  8.0K  3.8G   1% /var/lib/kubelet/pods/fa2af6c2-9912-464b-8d43-5fcd6473fc13/volumes/kubernetes.io~secret/kubevirt-virt-api-certs
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/ef243c1f62650ac77de2525f199b52f3e63e82badb2465a2b47bddadbda95923/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/7acb351ec8eb07876ffe470d2a5d83ea260ae70e7aee4558db0ca755887f3e50/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/874c2b916a2b5f86a57a1c4fc68947119e0854e997996bf5345e39284b7cbaa1/merged
tmpfs                    3.8G  8.0K  3.8G   1% /var/lib/kubelet/pods/c76d1164-ad80-48fb-8f53-34e873d5e446/volumes/kubernetes.io~secret/kubevirt-virt-handler-certs
tmpfs                    3.8G  8.0K  3.8G   1% /var/lib/kubelet/pods/c76d1164-ad80-48fb-8f53-34e873d5e446/volumes/kubernetes.io~secret/kubevirt-virt-handler-server-certs
tmpfs                    3.8G  8.0K  3.8G   1% /var/lib/kubelet/pods/35153ae2-e931-45f8-a374-2d4da17fd354/volumes/kubernetes.io~secret/kubevirt-controller-certs
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/35153ae2-e931-45f8-a374-2d4da17fd354/volumes/kubernetes.io~projected/kube-api-access-28wt5
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/c76d1164-ad80-48fb-8f53-34e873d5e446/volumes/kubernetes.io~projected/kube-api-access-pckgc
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/0f5b9d0761baaf31c5ce519f507295c2cb8c14e5791403f5a65de5c69d2e5ae1/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/f0a64a8405ebfffb980d40a4e0d9c829000d7a4bbe1f1712705a6a84b357e5e1/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/f116e463a9f43d90a24f6c719fcbef9bc18494c1680065cde6b83aa89a73533d/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/5383598d8af1938aef830ba4c68ce26116f0b1e2ec2b40abf53c0268fa0f9fcb/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/f6f05621a5cb6f361ccd023773ee27a680da6d3a7f44627adfa3ba13c9969ba2/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/ed10477d475aa46d65ba380fcef6f6d98d82d7686263e6bd5e95ebdf9f48e3da/merged
tmpfs                     50M   12K   50M   1% /var/lib/kubelet/pods/a327cf65-4ace-4281-8b75-e3badd0b912a/volumes/kubernetes.io~projected/kube-api-access-dkrbx
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/9b33e3c7aed5bdce53b286b1e342078ca9bd1602bfcf561e9d2ffa4c8cd655cd/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/091e2157f089eb0747e63f4e4b4282f9d40fc45c90d3a73ec4e187aa38f419cd/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/5a471abc372438b63f23ef5ea8fd82a1f01db73afb3589c4b01a76e04b1857ad/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/af21221686a4ae59ff55c16945cf649004eeecf9e1333a56a1a99a91e91ebd65/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/2f60c35ae8573c9cbad08fb26d1ec676fb156d869af2c7b533e84c7503c30c34/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/59b1196c741f5aa612fdbfad9a4c04b937f36648f24ac3d4ee6acab567152d67/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/1604976bc8461cfb835ab152ccf0219a108cd2032d2e5c3a777b189a3953acca/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/96343f1ed6280bfbc9e465165762ba8394a5abf223d20ba4c74c6cf99727eb44/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/92e58a653625ee2e93c6ac90d69e996af6d306e16637825203cd4ebe7511a7d4/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/a6f9001a599dc66c3d9de23e6708bd6b3ff685a69b44b1c666d8ff9f0b28050b/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/ad9e487aaf165f4950d430b0fd95db9d45d515dbbfae24d7338b0922b8cf56bb/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/f689a5ad2eb8cba138142620f27a0c5968ca36886f37698945018e15e0b96965/merged
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/7b683571-c0de-45bd-9ec4-c2eac9755c69/volumes/kubernetes.io~projected/kube-api-access-gztdj
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/dd459ae9c7ce30526e1a0d68e2ab5cce5b6ad1e353d91a7a7ab5f9dc6c40b2d8/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/d94be22f84c576ee1daa39a03f60efddd60e46543a2b4eb5d536621855c8e19b/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/0b431cf493569d862eee88343dc82bf8fc8541aaa43e8f182b12205e45026e66/merged
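
If space does run low, Docker's own accounting shows where it went, and dangling layers can be pruned (standard docker housekeeping; prune deletes data, so use it with care on a live node):

# Show per-category Docker disk usage, then remove dangling images only
[root@k8s-master-node1 ~]# docker system df
[root@k8s-master-node1 ~]# docker image prune -f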
