The resources below are based on Kubernetes 1.19; the full list can be printed with `kubectl api-resources`.
name | short name | API group | namespaced | kind
---|---|---|---|---
bindings | | | true | Binding
componentstatuses | cs | | false | ComponentStatus
configmaps | cm | | true | ConfigMap
endpoints | ep | | true | Endpoints
events | ev | | true | Event
limitranges | limits | | true | LimitRange
namespaces | ns | | false | Namespace
nodes | no | | false | Node
persistentvolumeclaims | pvc | | true | PersistentVolumeClaim
persistentvolumes | pv | | false | PersistentVolume
pods | po | | true | Pod
podtemplates | | | true | PodTemplate
replicationcontrollers | rc | | true | ReplicationController
resourcequotas | quota | | true | ResourceQuota
secrets | | | true | Secret
serviceaccounts | sa | | true | ServiceAccount
services | svc | | true | Service
mutatingwebhookconfigurations | | admissionregistration.k8s.io | false | MutatingWebhookConfiguration
validatingwebhookconfigurations | | admissionregistration.k8s.io | false | ValidatingWebhookConfiguration
customresourcedefinitions | | apiextensions.k8s.io | false | CustomResourceDefinition
apiservices | | apiregistration.k8s.io | false | APIService
controllerrevisions | | apps | true | ControllerRevision
daemonsets | ds | apps | true | DaemonSet
deployments | deploy | apps | true | Deployment
replicasets | rs | apps | true | ReplicaSet
statefulsets | sts | apps | true | StatefulSet
tokenreviews | | authentication.k8s.io | false | TokenReview
localsubjectaccessreviews | | authorization.k8s.io | true | LocalSubjectAccessReview
selfsubjectaccessreviews | | authorization.k8s.io | false | SelfSubjectAccessReview
subjectaccessreviews | | authorization.k8s.io | false | SubjectAccessReview
horizontalpodautoscalers | hpa | autoscaling | true | HorizontalPodAutoscaler
cronjobs | cj | batch | true | CronJob
jobs | | batch | true | Job
certificatesigningrequests | csr | certificates.k8s.io | false | CertificateSigningRequest
leases | | coordination.k8s.io | true | Lease
endpointslices | | discovery.k8s.io | true | EndpointSlice
events | ev | events.k8s.io | true | Event
flowschemas | | flowcontrol.apiserver.k8s.io | false | FlowSchema
prioritylevelconfigurations | | flowcontrol.apiserver.k8s.io | false | PriorityLevelConfiguration
ingressclasses | | networking.k8s.io | false | IngressClass
ingresses | ing | networking.k8s.io | true | Ingress
networkpolicies | netpol | networking.k8s.io | true | NetworkPolicy
runtimeclasses | | node.k8s.io | false | RuntimeClass
poddisruptionbudgets | pdb | policy | true | PodDisruptionBudget
podsecuritypolicies | psp | policy | false | PodSecurityPolicy
clusterrolebindings | | rbac.authorization.k8s.io | false | ClusterRoleBinding
clusterroles | | rbac.authorization.k8s.io | false | ClusterRole
rolebindings | | rbac.authorization.k8s.io | true | RoleBinding
roles | | rbac.authorization.k8s.io | true | Role
priorityclasses | pc | scheduling.k8s.io | false | PriorityClass
csidrivers | | storage.k8s.io | false | CSIDriver
csinodes | | storage.k8s.io | false | CSINode
storageclasses | sc | storage.k8s.io | false | StorageClass
volumeattachments | | storage.k8s.io | false | VolumeAttachment
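Besides reading the table, `kubectl api-resources` can filter this list directly with its own flags; for example:

```bash
# Only resources that live inside a namespace
kubectl api-resources --namespaced=true

# Only resources in the apps API group, with the supported verbs shown
kubectl api-resources --api-group=apps -o wide

# Resources that can be listed, printed as plain names
kubectl api-resources --verbs=list -o name
```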
Let's take a look at which components a freshly installed cluster contains:
~# kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/calico-kube-controllers-8b5ff5d58-szbwh 1/1 Running 3 51d
kube-system pod/calico-node-m2cfp 1/1 Running 1 51d
kube-system pod/calico-node-pwgkr 1/1 Running 1 51d
kube-system pod/calico-node-xbxnq 1/1 Running 2 51d
kube-system pod/coredns-85967d65-k2pnp 1/1 Running 1 51d
kube-system pod/coredns-85967d65-zkj27 1/1 Running 1 51d
kube-system pod/dns-autoscaler-5b7b5c9b6f-8w4vh 1/1 Running 1 51d
kube-system pod/kube-apiserver-node1 1/1 Running 1 51d
kube-system pod/kube-apiserver-node2 1/1 Running 1 51d
kube-system pod/kube-controller-manager-node1 1/1 Running 46 51d
kube-system pod/kube-controller-manager-node2 1/1 Running 51 51d
kube-system pod/kube-proxy-4vv6x 1/1 Running 1 51d
kube-system pod/kube-proxy-dc8zm 1/1 Running 1 51d
kube-system pod/kube-proxy-g256z 1/1 Running 2 51d
kube-system pod/kube-scheduler-node1 1/1 Running 44 51d
kube-system pod/kube-scheduler-node2 1/1 Running 48 51d
kube-system pod/nginx-proxy-node3 1/1 Running 2 51d
kube-system pod/nodelocaldns-9469d 1/1 Running 1 51d
kube-system pod/nodelocaldns-kz8f2 1/1 Running 1 51d
kube-system pod/nodelocaldns-ld4n4 1/1 Running 2 51d
kubernetes-dashboard pod/dashboard-metrics-scraper-7b59f7d4df-bm742 1/1 Running 1 49d
kubernetes-dashboard pod/kubernetes-dashboard-997f4979d-xmbfn 1/1 Running 1 49d

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 51d
kube-system service/coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 51d
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.233.54.163 <none> 8000/TCP 49d
kubernetes-dashboard service/kubernetes-dashboard NodePort 10.233.36.214 <none> 443:31241/TCP 49d

NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 3 3 3 3 3 <none> 51d
kube-system daemonset.apps/kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 51d
kube-system daemonset.apps/nodelocaldns 3 3 3 3 3 <none> 51d

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 51d
kube-system deployment.apps/coredns 2/2 2 2 51d
kube-system deployment.apps/dns-autoscaler 1/1 1 1 51d
kubernetes-dashboard deployment.apps/dashboard-metrics-scraper 1/1 1 1 49d
kubernetes-dashboard deployment.apps/kubernetes-dashboard 1/1 1 1 49d

NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-8b5ff5d58 1 1 1 51d
kube-system replicaset.apps/coredns-85967d65 2 2 2 51d
kube-system replicaset.apps/dns-autoscaler-5b7b5c9b6f 1 1 1 51d
kubernetes-dashboard replicaset.apps/dashboard-metrics-scraper-7b59f7d4df 1 1 1 49d
kubernetes-dashboard replicaset.apps/kubernetes-dashboard-74d688b6bc 0 0 0 49d
kubernetes-dashboard replicaset.apps/kubernetes-dashboard-997f4979d 1 1 1 49d
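Before digging into individual pods, a quick way to spot which pods have been restarted is to sort them by restart count; a minimal sketch, assuming we only care about the first container of each pod:

```bash
# Sort all pods by the restart count of their first container (highest last)
kubectl get pods -A --sort-by='.status.containerStatuses[0].restartCount'
```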
We can see that several of these components have been restarted. Let's look at the details to see why they were restarted; the following command shows the reason for the previous failure:
kubectl describe pod/kube-scheduler-node1 -n kube-system
State:          Running
  Started:      Tue, 06 Apr 2021 02:49:36 +0800
Last State:     Terminated
  Reason:       Error
  Exit Code:    255
  Started:      Tue, 06 Apr 2021 02:04:10 +0800
  Finished:     Tue, 06 Apr 2021 02:49:35 +0800
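The same last-termination fields can also be pulled straight from the pod status with JSONPath instead of scanning the `kubectl describe` output; a small sketch, assuming the scheduler pod has a single container:

```bash
# Reason and exit code of the previously terminated container instance
kubectl get pod kube-scheduler-node1 -n kube-system \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}{.status.containerStatuses[0].lastState.terminated.exitCode}{"\n"}'
```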
Apart from the `Error` reason there is no other useful information here. That's fine: the `-p` flag lets us look at the logs of the previous container instance. Run the following command to view the previous container's logs:
kubectl logs pod/kube-scheduler-node1 -n kube-system -p | grep error
If a single matching line does not give enough context, add `-C10` to grep to print the 10 lines before and after each match.
The output looks like this:
~# kubectl logs pod/kube-scheduler-node1 -n kube-system -p | grep error
E0405 18:44:55.781090 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-scheduler: Get "https://x.x.x.x:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
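If that single line is not enough to work with, the same command can be rerun with the `-C10` flag mentioned above to show the surrounding context:

```bash
# Previous container's logs, with 10 lines of context around every "error" match
kubectl logs pod/kube-scheduler-node1 -n kube-system -p | grep -C10 error
```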
After searching for the cause, I found the answer on Stack Overflow: