Kubernetes Basics (30): imagefs and nodefs Limits

The kubelet can enforce limits on disk usage, but only on two filesystems, nodefs and imagefs, where

  • imagefs: the partition containing the Docker installation directory
  • nodefs: the partition containing the directory specified by the kubelet startup flag --root-dir (default /var/lib/kubelet)
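
On a node, you can confirm which partitions these map to before changing anything. A small sketch (on the node used in this article Docker lives under /app/docker; adjust the paths for your own environment):

$ df -h /var/lib/kubelet                             # partition backing nodefs (kubelet --root-dir)
$ docker info 2>/dev/null | grep "Docker Root Dir"   # where Docker stores images/containers
$ df -h /app/docker                                  # partition backing imagefs on this node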

Next, let's verify this understanding of imagefs and nodefs.

Prerequisites

The k8s cluster runs version 1.8.6:

$ kubectl get node
NAME             STATUS                     ROLES     AGE       VERSION
10.142.232.161   Ready                      <none>    263d      v1.8.6
10.142.232.162   NotReady                   <none>    263d      v1.8.6
10.142.232.163   Ready,SchedulingDisabled   <none>    227d      v1.8.6

On 10.142.232.161, Docker is installed under /app/docker, and --root-dir is not set on the kubelet, so the default /var/lib/kubelet is used. /app is one disk at 70% usage and / is another disk at 57% usage, while the imagefs and nodefs thresholds are currently both 80% usage, which --eviction-hard expresses as nodefs.available<20% and imagefs.available<20% (less than 20% of the space still free). See below:

$ df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
devtmpfs                devtmpfs   16G     0   16G    0% /dev
tmpfs                   tmpfs      16G     0   16G    0% /dev/shm
tmpfs                   tmpfs      16G  1.7G   15G   11% /run
tmpfs                   tmpfs      16G     0   16G    0% /sys/fs/cgroup
/dev/mapper/centos-root xfs        45G   26G   20G   57% /
/dev/xvda1              xfs       497M  254M  243M   52% /boot
/dev/xvde               xfs       150G  105G   46G   70% /app

$ ps -ef | grep kubelet
root     125179      1 37 17:50 ?        00:00:01 /usr/bin/kubelet --address=0.0.0.0 --allow-privileged=true --cluster-dns=10.254.0.10 --cluster-domain=kube.local --fail-swap-on=false --hostname-override=10.142.232.161 --kubeconfig=/etc/kubernetes/kubeconfig --pod-infra-container-image=10.142.233.76:8021/library/pause:latest --port=10250 --enforce-node-allocatable=pods --eviction-hard=memory.available<20%,nodefs.inodesFree<20%,imagefs.inodesFree<20%,nodefs.available<20%,imagefs.available<20% --network-plugin=cni
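
The flag that matters for this article is --eviction-hard. Broken across lines with comments purely for readability (the real flag is a single comma-separated string with no spaces or comments), the hard eviction signals configured above are:

--eviction-hard=memory.available<20%,    # evict when free memory drops below 20%
                nodefs.inodesFree<20%,   # ... or nodefs free inodes drop below 20%
                imagefs.inodesFree<20%,  # ... or imagefs free inodes drop below 20%
                nodefs.available<20%,    # ... or nodefs free space drops below 20% (usage above 80%)
                imagefs.available<20%    # ... or imagefs free space drops below 20% (usage above 80%)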

At this point, node 10.142.232.161 reports no disk-related errors:

$ kubectl describe node 10.142.232.161
...
Events:
  Type     Reason                   Age                 From                     Message
  ----     ------                   ----                ----                     -------
  Normal   Starting                 18s                 kubelet, 10.142.232.161  Starting kubelet.
  Normal   NodeAllocatableEnforced  18s                 kubelet, 10.142.232.161  Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientDisk    18s                 kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasSufficientDisk
  Normal   NodeHasSufficientMemory  18s                 kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    18s                 kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasNoDiskPressure
  Normal   NodeNotReady             18s                 kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeNotReady
  Normal   NodeReady                8s                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeReady

Verification plan

  • Verify that imagefs is the partition containing /app/docker (the /app partition is at 70% usage; the thresholds are adjusted by restarting kubelet with a modified --eviction-hard, see the sketch after this list)
    • Set the imagefs threshold to 60%; the node should report imagefs over the threshold
    • Set the imagefs threshold to 80%; the node should be normal
  • Verify that nodefs is the partition containing /var/lib/kubelet (the / partition is at 57% usage)
    • Set the nodefs threshold to 50%; the node should report nodefs over the threshold
    • Set the nodefs threshold to 60%; the node should be normal
  • Change the kubelet startup flag --root-dir to /app/kubelet
    • Set the imagefs threshold to 80% and the nodefs threshold to 60%; nodefs should be reported over the threshold
    • Set the imagefs threshold to 60% and the nodefs threshold to 80%; imagefs should be reported over the threshold
    • Set the imagefs threshold to 60% and the nodefs threshold to 60%; both should be reported over their thresholds
    • Set the imagefs threshold to 80% and the nodefs threshold to 80%; the node should be normal
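
Each run below restarts kubelet with a modified --eviction-hard value. How exactly that is done depends on how kubelet is launched on the node; a minimal sketch, assuming a systemd-managed kubelet (the unit path /etc/systemd/system/kubelet.service is an assumption, your setup may use a drop-in file instead):

$ vi /etc/systemd/system/kubelet.service   # hypothetical path: edit the --eviction-hard value,
                                           # e.g. change imagefs.available<20% to imagefs.available<40%
$ systemctl daemon-reload
$ systemctl restart kubelet
$ ps -ef | grep kubelet                    # confirm kubelet came back with the new flags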

Verification steps

1. Verify that imagefs is the partition containing /app/docker

1.1 Set the imagefs threshold to 60%; the node should report imagefs over the threshold

As shown below, we set the imagefs threshold to 60% (imagefs.available<40%):

$ ps -ef | grep kubelet
root      41234      1 72 18:17 ?        00:00:02 /usr/bin/kubelet --address=0.0.0.0 --allow-privileged=true --cluster-dns=10.254.0.10 --cluster-domain=kube.local --fail-swap-on=false --hostname-override=10.142.232.161 --kubeconfig=/etc/kubernetes/kubeconfig --pod-infra-container-image=10.142.233.76:8021/library/pause:latest --port=10250 --enforce-node-allocatable=pods --eviction-hard=memory.available<20%,nodefs.inodesFree<20%,imagefs.inodesFree<20%,nodefs.available<20%,imagefs.available<40% --network-plugin=cni

Then we check the node's status. It reports "Attempting to reclaim imagefs", meaning the kubelet is attempting to reclaim imagefs space, so the imagefs threshold has been crossed:

$ kubectl describe node 10.142.232.161
...
  Normal   NodeAllocatableEnforced  1m                  kubelet, 10.142.232.161  Updated Node Allocatable limit across pods
  Normal   Starting                 1m                  kubelet, 10.142.232.161  Starting kubelet.
  Normal   NodeHasSufficientDisk    1m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasSufficientDisk
  Normal   NodeHasSufficientMemory  1m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    1m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasNoDiskPressure
  Normal   NodeNotReady             1m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeNotReady
  Normal   NodeHasDiskPressure      1m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasDiskPressure
  Normal   NodeReady                1m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeReady
  Warning  EvictionThresholdMet     18s (x4 over 1m)    kubelet, 10.142.232.161  Attempting to reclaim imagefs
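
Besides reading the events, the DiskPressure condition can be queried directly from the node object, for example:

$ kubectl get node 10.142.232.161 -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")].status}'
# prints True while the node is under disk pressure, False once it recovers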

1.2 Set the imagefs threshold to 80%; the node should be normal

We set the imagefs threshold back to 80% (imagefs.available<20%):

$ ps -ef | grep kubelet
root      51402      1 19 18:24 ?        00:00:06 /usr/bin/kubelet --address=0.0.0.0 --allow-privileged=true --cluster-dns=10.254.0.10 --cluster-domain=kube.local --fail-swap-on=false --hostname-override=10.142.232.161 --kubeconfig=/etc/kubernetes/kubeconfig --pod-infra-container-image=10.142.233.76:8021/library/pause:latest --port=10250 --enforce-node-allocatable=pods --eviction-hard=memory.available<20%,nodefs.inodesFree<20%,imagefs.inodesFree<20%,nodefs.available<20%,imagefs.available<20% --network-plugin=cni

Checking the node status again, it reports NodeHasNoDiskPressure, so imagefs usage is no longer over the threshold:

$ kubectl describe node 10.142.232.161
...
  Warning  EvictionThresholdMet     6m (x22 over 11m)   kubelet, 10.142.232.161  Attempting to reclaim imagefs
  Normal   Starting                 5m                  kubelet, 10.142.232.161  Starting kubelet.
  Normal   NodeAllocatableEnforced  5m                  kubelet, 10.142.232.161  Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientDisk    5m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasSufficientDisk
  Normal   NodeHasSufficientMemory  5m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    5m (x2 over 5m)     kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasNoDiskPressure
  Normal   NodeNotReady             5m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeNotReady
  Normal   NodeReady                4m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeReady

2. Verify that nodefs is the partition containing /var/lib/kubelet (the / partition is at 57% usage)

2.1 Set the nodefs threshold to 50%; the node should report nodefs over the threshold

Set the nodefs threshold to 50% (nodefs.available<50%):

$ ps -ef | grep kubelet
root      72575      1 59 18:35 ?        00:00:04 /usr/bin/kubelet --address=0.0.0.0 --allow-privileged=true --cluster-dns=10.254.0.10 --cluster-domain=kube.local --fail-swap-on=false --hostname-override=10.142.232.161 --kubeconfig=/etc/kubernetes/kubeconfig --pod-infra-container-image=10.142.233.76:8021/library/pause:latest --port=10250 --enforce-node-allocatable=pods --eviction-hard=memory.available<20%,nodefs.inodesFree<20%,imagefs.inodesFree<20%,nodefs.available<50%,imagefs.available<20% --network-plugin=cni

Checking the node status, it reports "Attempting to reclaim nodefs", i.e. the kubelet is trying to reclaim nodefs space, which means nodefs is over the threshold:

$ kubectl describe node 10.142.232.161
...
  Normal   Starting                 1m                  kubelet, 10.142.232.161  Starting kubelet.
  Normal   NodeAllocatableEnforced  1m                  kubelet, 10.142.232.161  Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientDisk    1m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasSufficientDisk
  Normal   NodeHasSufficientMemory  1m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    1m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasNoDiskPressure
  Normal   NodeNotReady             1m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeNotReady
  Normal   NodeHasDiskPressure      53s                 kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeHasDiskPressure
  Normal   NodeReady                53s                 kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeReady
  Warning  EvictionThresholdMet     2s (x5 over 1m)     kubelet, 10.142.232.161  Attempting to reclaim nodefs

2.2 Set the nodefs threshold to 60%; the node should be normal

Set the nodefs threshold to 60% (nodefs.available<40%):

$ ps -ef | grep kubelet
root      78664      1 31 18:38 ?        00:00:02 /usr/bin/kubelet --address=0.0.0.0 --allow-privileged=true --cluster-dns=10.254.0.10 --cluster-domain=kube.local --fail-swap-on=false --hostname-override=10.142.232.161 --kubeconfig=/etc/kubernetes/kubeconfig --pod-infra-container-image=10.142.233.76:8021/library/pause:latest --port=10250 --enforce-node-allocatable=pods --eviction-hard=memory.available<20%,nodefs.inodesFree<20%,imagefs.inodesFree<20%,nodefs.available<40%,imagefs.available<20% --network-plugin=cni

Checking the node status now, it is back to normal:

$ kubectl describe node 10.142.232.161
...
  Normal   Starting                 2m                  kubelet, 10.142.232.161  Starting kubelet.
  Normal   NodeReady                1m                  kubelet, 10.142.232.161  Node 10.142.232.161 status is now: NodeReady

3. Change the kubelet startup flag --root-dir to /app/kubelet

The following flags all have default values under /var/lib/kubelet:

--root-dir              # default: /var/lib/kubelet
--seccomp-profile-root  # default: /var/lib/kubelet/seccomp
--cert-dir              # default: /var/lib/kubelet/pki
--kubeconfig            # default: /var/lib/kubelet/kubeconfig

To stop using the /var/lib/kubelet directory altogether, we set all four flags explicitly, as follows:

--root-dir=/app/kubelet
--seccomp-profile-root=/app/kubelet/seccomp
--cert-dir=/app/kubelet/pki
--kubeconfig=/etc/kubernetes/kubeconfig
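
After restarting kubelet with these flags, you can also confirm which filesystems it now treats as nodefs and imagefs through the kubelet Summary API. A sketch, assuming the kubelet read-only port (default 10255) is still enabled on this node:

$ curl -s http://127.0.0.1:10255/stats/summary | \
    python -c 'import json,sys; s=json.load(sys.stdin); print(s["node"]["fs"]); print(s["node"]["runtime"]["imageFs"])'
# "fs" holds the nodefs stats (it should now reflect the /app partition);
# "runtime.imageFs" holds the imagefs stats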

3.1 Set the imagefs threshold to 80% and the nodefs threshold to 60%; nodefs should be reported over the threshold

$ ps -ef | grep kubelet
root      14423      1 10 19:28 ?        00:00:34 /usr/bin/kubelet --address=0.0.0.0 --allow-privileged=true --cluster-dns=10.254.0.10 --cluster-domain=kube.local --fail-swap-on=false --hostname-override=10.142.232.161 --kubeconfig=/etc/kubernetes/kubeconfig --pod-infra-container-image=10.142.233.76:8021/library/pause:latest --port=10250 --enforce-node-allocatable=pods --eviction-hard=memory.available<20%,nodefs.inodesFree<20%,imagefs.inodesFree<20%,nodefs.available<40%,imagefs.available<20% --root-dir=/app/kubelet --seccomp-profile-root=/app/kubelet/seccomp --cert-dir=/app/kubelet/pki --network-plugin=cni

Checking the node status, only "Attempting to reclaim nodefs" is reported, i.e. only nodefs is over its threshold (nodefs now lives under /app/kubelet, so it is the /app partition at 70% usage, above the 60% threshold):

$ kubectl describe node 10.142.232.161
...
  Normal   NodeHasDiskPressure      3m                  kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeHasDiskPressure
  Normal   NodeReady                3m                  kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeReady
  Normal   Starting                 3m                  kube-proxy, 10.142.232.161  Starting kube-proxy.
  Warning  EvictionThresholdMet     27s (x15 over 3m)   kubelet, 10.142.232.161     Attempting to reclaim nodefs

3.2 Set the imagefs threshold to 60% and the nodefs threshold to 80%; imagefs should be reported over the threshold

$ ps -ef |grep kubelet
root      21381      1 30 19:36 ?        00:00:02 /usr/bin/kubelet --address=0.0.0.0 --allow-privileged=true --cluster-dns=10.254.0.10 --cluster-domain=kube.local --fail-swap-on=false --hostname-override=10.142.232.161 --kubeconfig=/etc/kubernetes/kubeconfig --pod-infra-container-image=10.142.233.76:8021/library/pause:latest --port=10250 --enforce-node-allocatable=pods --eviction-hard=memory.available<20%,nodefs.inodesFree<20%,imagefs.inodesFree<20%,nodefs.available<20%,imagefs.available<40% --root-dir=/app/kubelet --seccomp-profile-root=/app/kubelet/seccomp --cert-dir=/app/kubelet/pki --network-plugin=cni

Checking the node status, only imagefs is reported over the threshold:

$ kubectl describe node 10.142.232.161
...
  Normal   Starting                 1m                 kubelet, 10.142.232.161     Starting kubelet.
  Normal   NodeAllocatableEnforced  1m                 kubelet, 10.142.232.161     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientDisk    1m                 kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeHasSufficientDisk
  Normal   NodeNotReady             1m                 kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeNotReady
  Normal   NodeHasNoDiskPressure    1m (x2 over 1m)    kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientMemory  1m                 kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeHasSufficientMemory
  Normal   NodeReady                1m                 kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeReady
  Normal   NodeHasDiskPressure      1m                 kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeHasDiskPressure
  Warning  EvictionThresholdMet     11s (x5 over 1m)   kubelet, 10.142.232.161     Attempting to reclaim imagefs

3.3 Set both the imagefs and nodefs thresholds to 60%; both should be reported over their thresholds

$ ps -ef | grep kubelet
root      24524      1 33 19:39 ?        00:00:01 /usr/bin/kubelet --address=0.0.0.0 --allow-privileged=true --cluster-dns=10.254.0.10 --cluster-domain=kube.local --fail-swap-on=false --hostname-override=10.142.232.161 --kubeconfig=/etc/kubernetes/kubeconfig --pod-infra-container-image=10.142.233.76:8021/library/pause:latest --port=10250 --enforce-node-allocatable=pods --eviction-hard=memory.available<20%,nodefs.inodesFree<20%,imagefs.inodesFree<20%,nodefs.available<40%,imagefs.available<40% --root-dir=/app/kubelet --seccomp-profile-root=/app/kubelet/seccomp --cert-dir=/app/kubelet/pki --network-plugin=cni

Checking the node status, both imagefs and nodefs are indeed reported over their thresholds:

$ kubectl describe node 10.142.232.161
...
  Normal   Starting                 1m                 kubelet, 10.142.232.161     Starting kubelet.
  Normal   NodeAllocatableEnforced  1m                 kubelet, 10.142.232.161     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientDisk    1m                 kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeHasSufficientDisk
  Normal   NodeHasSufficientMemory  1m                 kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    1m (x2 over 1m)    kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeHasNoDiskPressure
  Normal   NodeNotReady             1m                 kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeNotReady
  Normal   NodeHasDiskPressure      1m                 kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeHasDiskPressure
  Normal   NodeReady                1m                 kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeReady
  Warning  EvictionThresholdMet     14s                kubelet, 10.142.232.161     Attempting to reclaim imagefs
  Warning  EvictionThresholdMet     4s (x8 over 1m)    kubelet, 10.142.232.161     Attempting to reclaim nodefs

3.4 Set both the imagefs and nodefs thresholds to 80%; the node should be normal

$ ps -ef | grep kubelet
root      27869      1 30 19:43 ?        00:00:01 /usr/bin/kubelet --address=0.0.0.0 --allow-privileged=true --cluster-dns=10.254.0.10 --cluster-domain=kube.local --fail-swap-on=false --hostname-override=10.142.232.161 --kubeconfig=/etc/kubernetes/kubeconfig --pod-infra-container-image=10.142.233.76:8021/library/pause:latest --port=10250 --enforce-node-allocatable=pods --eviction-hard=memory.available<20%,nodefs.inodesFree<20%,imagefs.inodesFree<20%,nodefs.available<20%,imagefs.available<20% --root-dir=/app/kubelet --seccomp-profile-root=/app/kubelet/seccomp --cert-dir=/app/kubelet/pki --network-plugin=cni

Checking the node status, there are indeed no imagefs or nodefs errors any more:

$ kubectl describe node 10.142.232.161
...
  Normal   Starting                 1m                  kubelet, 10.142.232.161     Starting kubelet.
  Normal   NodeHasSufficientDisk    1m                  kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeHasSufficientDisk
  Normal   NodeHasSufficientMemory  1m                  kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeHasSufficientMemory
  Normal   NodeNotReady             1m                  kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeNotReady
  Normal   NodeAllocatableEnforced  1m                  kubelet, 10.142.232.161     Updated Node Allocatable limit across pods
  Normal   NodeReady                1m                  kubelet, 10.142.232.161     Node 10.142.232.161 status is now: NodeReady

Summary

1. nodefs is the partition containing the --root-dir directory; imagefs is the partition containing the Docker installation directory.
2. It is recommended to put nodefs and imagefs on the same partition, but make that partition large enough.
3. When nodefs and imagefs share one partition, the other kubelet flags that default to /var/lib/kubelet, such as --root-dir and --cert-dir (and --seccomp-profile-root), should be explicitly pointed at directories on that partition, as done in step 3 above.

