k8s Problem Log and Solutions


1. Problem: error: open /var/lib/kubelet/config.yaml: no such file or directory
  Solution: A required file is missing. This usually happens when systemctl start kubelet is run before kubeadm init. Run kubeadm init successfully first; it is what generates /var/lib/kubelet/config.yaml.
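The dependency can be made explicit with a small guard; a minimal sketch (the path is the standard kubeadm location, and the helper name is just for illustration):

```shell
#!/bin/sh
# kubeadm init / kubeadm join generate /var/lib/kubelet/config.yaml;
# kubelet refuses to start while it is missing.
kubelet_ready() {
    # Succeeds only once the kubelet config file exists.
    [ -f "${1:-/var/lib/kubelet/config.yaml}" ]
}

if kubelet_ready; then
    systemctl start kubelet
else
    echo "run 'kubeadm init' (or 'kubeadm join') before starting kubelet" >&2
fi
```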

2. kubelet.service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. Refusing.
  Solution: Open /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and check its contents:
    [root@k8s-master ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # Note: This dropin only works with kubeadm and kubelet v1.11+
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
    # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
    EnvironmentFile=-/etc/sysconfig/kubelet
    The drop-in needs an empty ExecStart= line to reset the value inherited from the base unit file, so uncomment the # ExecStart= line, leaving exactly:
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
    then run systemctl daemon-reload and restart kubelet.

3. journalctl -f -u kubelet (-f is --follow; -u filters to the kubelet unit's logs)
    Viewing logs on CentOS 7
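A few more journalctl invocations that come in handy when chasing kubelet failures (all standard journalctl options; run them on the node in question):

```shell
# Follow kubelet logs live, as above
journalctl -f -u kubelet

# Only entries from the current boot - useful right after a VM restart
journalctl -u kubelet -b

# A fixed time window, oldest entries first
journalctl -u kubelet --since "2019-05-18 09:00" --until "2019-05-18 10:00"

# Last 100 lines, with extra explanatory text where available
journalctl -u kubelet -n 100 -x
```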

4. Reinstalling the master (k8s-mst) of a kubeadm-installed cluster
    Check the configuration in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl restart kubelet
    kubeadm reset
    kubeadm init --kubernetes-version=v1.13.4 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
    On the other nodes, run:
    kubeadm reset
    followed by the kubeadm join ... command

5. Adding a node to the cluster after the kubeadm-generated token has expired
    Reference: https://www.jianshu.com/p/a5e379638577
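The usual fix, sketched from the standard kubeadm subcommands (run on the master; the node/IP placeholders are illustrative):

```shell
# Create a fresh bootstrap token (default TTL 24h) and print a ready-made join command
kubeadm token create --print-join-command

# Or assemble the join command manually: list tokens, then derive the CA cert hash
kubeadm token list
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'

# On the new node:
# kubeadm join <master-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```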

6. [reset] unmounting mounted directories in "/var/lib/kubelet" hangs
    Reboot the machine and rerun the command

-------------------20190518 update-------------------

7. After the VM hosting the master rebooted: kube-master1 kubelet[34770]: E0419 13:52:09.511348   34770 kubelet.go:2266] node "kube-master1" not found. The node's IP address came back empty, and ifconfig showed that the eth0/ens33 NIC had no configuration at all.

Solution: Run, in order: systemctl stop kubelet, systemctl stop docker, systemctl restart network, systemctl restart docker, systemctl restart kubelet, then verify with ifconfig and kubectl get node.

-------------------20190614 update-------------------

8. kubelet, k8s-node1  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1b7fb9d83e89dbe2815cc10fb1daf342162cb74da30568c0a59585e1dc9329a4" network for pod "wxapp-redis-1": NetworkPlugin cni failed to set up pod "wxapp-redis-1_pnup" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.1.1/24

Solution: On the affected node, run the following commands:

[root@k8s-node1 redis]# cd /var/lib/cni/flannel/
[root@k8s-node1 flannel]# ll
total 32
-rw-------. 1 root root 206 Apr 19 11:30 0727fe1a742f28b9a5d5d3188496bdc0aec220599caf6ce8c28f1b9c8ef1b8d4
-rw-------. 1 root root 206 Apr 19 14:30 13776cebe870d3f58982a123e2e32a4a89780c421db1bc425f13f13756822f81
-rw-------. 1 root root 206 Apr 19 11:30 1f249dd31dae8177a4fa5d3009eec7a36a4dccd8a836975f1d798adf43afda51
-rw-------. 1 root root 206 Apr 19 14:34 317e1ce2f78fddd1b36879be5dff169d27fefd4ca191dcd9d85781ab65cc14d8
-rw-------. 1 root root 187 Jun 14 10:05 5b104d16ea2042bd67f8958b8042fff01de3ba6a69f1a62f4b3ded81955c24bb
-rw-------. 1 root root 206 Apr 23 15:30 5bfe39c5fb73ecb09b7343260b8fc2526bb99b7d00a216ab0c76d87d247f3bc0
-rw-------. 1 root root 187 Jun 14 10:05 85de9489e5a4b5091bc40b5dc216a9f15fdb9aa077a28df32cfef97f8abd0c81
-rw-------. 1 root root 206 Apr 19 11:30 db015f887bbedf2ad7731aaa4a321183594a1d2c9b95398bb25f61ad5b052092
[root@k8s-node1 flannel]# systemctl stop docker
[root@k8s-node1 flannel]# systemctl stop kubelet
[root@k8s-node1 flannel]# systemctl stop kube-proxy

Failed to stop kube-proxy.service: Unit kube-proxy.service not loaded.
[root@k8s-node1 flannel]# rm -rf /var/lib/cni/flannel/ && rm -rf /var/lib/cni/networks/cbr0/ && ip link delete cni0
[root@k8s-node1 flannel]# rm -rf /var/lib/cni/networks/cni0/*
[root@k8s-node1 flannel]# systemctl start docker
[root@k8s-node1 flannel]# systemctl start kubelet
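Before wiping the bridge as above, the mismatch can be confirmed first: flannel records the subnet leased to the node in /run/flannel/subnet.env, and cni0's address must come from that subnet. A small helper to pull the value out (the function name is just for illustration):

```shell
#!/bin/sh
# Extract the FLANNEL_SUBNET value from a flannel subnet.env-style file,
# e.g. "FLANNEL_SUBNET=10.244.1.1/24" -> "10.244.1.1/24".
flannel_subnet() {
    sed -n 's/^FLANNEL_SUBNET=//p' "$1"
}

# On the node, compare:
#   flannel_subnet /run/flannel/subnet.env   # subnet flannel leased
#   ip -4 addr show cni0                     # address actually on the bridge
# If they disagree, cni0 is stale - clean it up as in the commands above and
# the CNI plugin will recreate it with the correct address.
```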

-------------------20190622 update-------------------

9. Repost: kubernetes kube-dns installation (https://blog.csdn.net/zhuchuangang/article/details/76093887 , https://www.cnblogs.com/chimeiwangliang/p/8809280.html)

10. Repost: troubleshooting network errors in kubernetes (http://www.mamicode.com/info-detail-2315259.html)

-------------------20200523 update-------------------

11. May 23 06:21:59 master dockerd-current[14690]: time="2020-05-23T06:21:59.555312805-04:00" level=error msg="Create container failed with error: oci runtime error: container_linux.go:235: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"Cannot set property TasksAccounting, or unknown property.\\\"\"\n"
May 23 06:21:59 master kubelet[21888]: E0523 06:21:59.654065   21888 kubelet.go:2266] node "master" not found

May 23 06:21:59 master kubelet[21888]: E0523 06:21:59.754397   21888 kubelet.go:2266] node "master" not found
May 23 06:21:59 master kubelet[21888]: E0523 06:21:59.855185   21888 kubelet.go:2266] node "master" not found
Docker fails to create containers with: Cannot set property TasksAccounting, or unknown property.
This showed up on a newly provisioned server while setting up MySQL in Docker; container creation failed with:
Error response from daemon: oci runtime error: container_linux.go:235: starting container process caused "process_linux.go:258: applying cgroup configuration for process caused \"Cannot set property TasksAccounting, or unknown property.\""
Cause: a CentOS version compatibility issue (an older systemd does not recognize the TasksAccounting property); updating the system resolves it.
Run yum update, then reinstall the kubernetes packages:

[root@web03 .kube]# yum remove -y kubectl.x86_64 kubeadm.x86_64 kubelet.x86_64
Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos, subscription-manager
...
Complete!
[root@web03 .kube]# yum list installed |grep kube
cri-tools.x86_64                      1.13.0-0                       @kubernetes
kubectl.x86_64                        1.13.4-0                       @kubernetes
[root@web03 .kube]# yum install -y kubelet-1.13.4-0.x86_64
Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos, subscription-manager
...
Complete!
[root@web03 .kube]# yum install -y kubeadm-1.13.4-0.x86_64
Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos, subscription-manager
...
Complete!
[root@web03 .kube]# yum list installed |grep kube
cri-tools.x86_64                      1.13.0-0                       @kubernetes
kubeadm.x86_64                        1.13.4-0                       @kubernetes
kubectl.x86_64                        1.13.4-0                       @kubernetes
kubelet.x86_64                        1.13.4-0                       @kubernetes
kubernetes-cni.x86_64                 0.6.0-0                        @kubernetes

12. Using the kube-router network plugin with k8s and monitoring traffic status
https://www.jianshu.com/p/1a3caecc3b6b

Appendix: kubeadm-kuberouter-all-features.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-router-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: kube-router
data:
  cni-conf.json: |
    {"cniVersion":"0.3.0","name":"mynet","plugins":[{"name":"kubernetes","type":"bridge","bridge":"kube-bridge","isDefaultGateway":true,"ipam":{"type":"host-local"}}]}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-router
    tier: node
  name: kube-router
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-router
        tier: node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kube-router
      serviceAccount: kube-router
      containers:
      - name: kube-router
        image: docker.io/cloudnativelabs/kube-router
        imagePullPolicy: IfNotPresent
        args:
        - --run-router=true
        - --run-firewall=true
        - --run-service-proxy=true
        - --kubeconfig=/var/lib/kube-router/kubeconfig
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: KUBE_ROUTER_CNI_CONF_FILE
          value: /etc/cni/net.d/10-kuberouter.conflist
        livenessProbe:
          httpGet:
            path: /healthz
            port: 20244
          initialDelaySeconds: 10
          periodSeconds: 3
        resources:
          requests:
            cpu: 250m
            memory: 250Mi
        securityContext:
          privileged: true
        volumeMounts:
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
        - name: kubeconfig
          mountPath: /var/lib/kube-router
          readOnly: true
      initContainers:
      - name: install-cni
        image: busybox
        imagePullPolicy: Always
        command:
        - /bin/sh
        - -c
        - set -e -x;
          if [ ! -f /etc/cni/net.d/10-kuberouter.conflist ]; then
            if [ -f /etc/cni/net.d/*.conf ]; then
              rm -f /etc/cni/net.d/*.conf;
            fi;
            TMP=/etc/cni/net.d/.tmp-kuberouter-cfg;
            cp /etc/kube-router/cni-conf.json ${TMP};
            mv ${TMP} /etc/cni/net.d/10-kuberouter.conflist;
          fi
        volumeMounts:
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
        - name: kube-router-cfg
          mountPath: /etc/kube-router
      hostNetwork: true
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
        operator: Exists
      volumes:
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: cni-conf-dir
        hostPath:
          path: /etc/cni/net.d
      - name: kube-router-cfg
        configMap:
          name: kube-router-cfg
      - name: kubeconfig
        configMap:
          name: kube-proxy
          items:
          - key: kubeconfig.conf
            path: kubeconfig
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-router
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-router
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  - services
  - nodes
  - endpoints
  verbs:
  - list
  - get
  - watch
- apiGroups:
  - "networking.k8s.io"
  resources:
  - networkpolicies
  verbs:
  - list
  - get
  - watch
- apiGroups:
  - extensions
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-router
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-router
subjects:
- kind: ServiceAccount
  name: kube-router
  namespace: kube-system
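Assuming the manifest above is saved as kubeadm-kuberouter-all-features.yaml, applying and checking it looks like:

```shell
kubectl apply -f kubeadm-kuberouter-all-features.yaml

# The DaemonSet should schedule one kube-router pod per node
kubectl -n kube-system get daemonset kube-router
kubectl -n kube-system get pods -l k8s-app=kube-router -o wide
```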

13. k8s tip: how to install a specific version of kubeadm
https://www.jianshu.com/p/4b22b5d2f69b
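A sketch of the same idea with yum, matching the 1.13.4 install in section 11 (assumes the kubernetes yum repo is already configured):

```shell
# See which versions the repo offers
yum list --showduplicates kubeadm kubelet kubectl

# Install a specific version of all three components together
yum install -y kubeadm-1.13.4-0 kubelet-1.13.4-0 kubectl-1.13.4-0

# Optionally lock them so a later 'yum update' cannot move them
yum install -y yum-plugin-versionlock
yum versionlock add kubeadm kubelet kubectl
```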




