Table of Contents
- I. RBAC Authorization
  - RBAC Concepts
  - Exam Task
  - 1. Switch to the specified cluster
  - 2. Create the ClusterRole
  - 3. Create a ServiceAccount
  - 4. Bind the account to the role
  - 5. Test
- II. Node Maintenance -- Marking a Node Unavailable
  - Exam Task
  - 1. Switch clusters first
  - 2. Mark the ek8s-node01 node as unschedulable
  - 3. Drain the Pods on ek8s-node01
  - Test
- III. Kubernetes Version Upgrade
  - Exam Task
  - Approach
  - 1. Decide which version to upgrade to
  - 2. Upgrade the control plane node
    - 1. Upgrade kubeadm
    - 2. Verify the download works and the kubeadm version is correct
    - 3. Verify the upgrade plan
    - 4. Choose the target version and run the appropriate command, making sure etcd is not upgraded
  - 3. Drain the node
  - 4. Upgrade kubelet and kubectl
    - 1. Upgrade kubelet and kubectl
    - 2. Restart kubelet
  - 5. Uncordon the node
- IV. etcd Backup and Restore
  - Exam Task
  - Help
  - 1. Backup
  - 2. Restore
  - Check the size of the backup file
- V. Network Policy -- NetworkPolicy
  - Variant 1
    - Example: cka-05-1.yaml
  - Variant 2
    - Get the labels of the big-corp namespace
    - Example: cka-05-2.yml
  - Variant 3
    - Example: cka-05-3.yml
- VI. Exposing an Application with a Service
  - Exam Task
  - Help
- VII. Ingress Layer-7 Proxy
  - Exam Task
- VIII. Scaling a Deployment
  - Exam Task
  - Method 1
  - Method 2
- IX. nodeSelector -- Scheduling a Pod to a Specific Node
  - Exam Task
- X. Checking Node Health
  - Exam Task
- XI. Creating a Multi-Container Pod
  - Exam Task
- XII. Creating a PV
  - Exam Task
- XIII. Creating a PVC
  - Exam Task
- XIV. Extracting Pod Error Logs
  - Exam Task
  - Variant
- XV. Sidecar Log Proxy
  - Exam Task
  - Analysis
  - Steps
    - 1. Switch clusters.
    - 2. Export the existing legacy-app pod, then delete legacy-app
    - 3. Edit the legacy-app.yaml file.
- XVI. Finding the Pod with the Highest CPU Usage
  - kubectl top pod -h
- XVII. Handling a NotReady Node
  - Exam Task
  - Analysis
  - Steps
I. RBAC Authorization
RBAC Concepts
Kubernetes RBAC (Role-Based Access Control) is a mechanism for managing the permissions of users and service accounts. It controls access to Kubernetes resources through roles and role bindings.
RBAC objects:
1. Role: defines a set of permissions, specifying which resources may be accessed and which operations are allowed. Roles are namespaced, so each namespace can have its own role definitions.
2. ClusterRole: like a Role, but not limited to a single namespace; it applies at the cluster level. It is typically used to grant access to resources across all namespaces, or to cluster-scoped resources such as nodes and persistent volumes.
3. RoleBinding: binds a Role (or ClusterRole) to a user, group, or service account, granting it the permissions defined by that role. RoleBindings are namespaced.
4. ClusterRoleBinding: binds a ClusterRole to a user, group, or service account, granting it those permissions across the whole cluster.
RBAC describes permissions with three core concepts:
1. Resources: Kubernetes resource types such as pods, deployments, and services.
2. Verbs: operations on resources, such as get, list, create, update, delete, and watch.
3. Resource Names: names of specific resources, which can optionally restrict operations to particular instances.
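To make these objects concrete, here is a minimal sketch of a namespaced Role plus a RoleBinding (the names pod-reader, demo and demo-sa are made up for illustration and are not part of the exam task):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # hypothetical role name
  namespace: demo             # Roles are namespaced
rules:
- apiGroups: [""]             # "" refers to the core API group
  resources: ["pods"]         # which resources
  verbs: ["get", "list", "watch"]   # which operations
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
- kind: ServiceAccount
  name: demo-sa               # hypothetical ServiceAccount
  namespace: demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io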
Exam Task
Context
You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.
Task
Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:
·Deployment
·StatefulSet
·DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Bind the new ClusterRole deployment-clusterrole to the ServiceAccount cicd-token, limited to the namespace app-team1.
Reference: https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/rbac/
1. Switch to the specified cluster
kubectl config use-context <cluster-name>
2. Create the ClusterRole
Create a ClusterRole named deployment-clusterrole with the create verb on deployments, daemonsets and statefulsets.
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,daemonsets,statefulsets
3. Create a ServiceAccount
Create a ServiceAccount named cicd-token in the app-team1 namespace.
kubectl create ns app-team1   # practice environment only; in the exam the namespace already exists
kubectl create serviceaccount cicd-token -n app-team1
4. Bind the account to the role
Create a RoleBinding named cicd-token that binds the cicd-token ServiceAccount in the app-team1 namespace to the deployment-clusterrole ClusterRole.
kubectl create rolebinding cicd-token --serviceaccount=app-team1:cicd-token --clusterrole=deployment-clusterrole -n app-team1
5. Test
The account can only create resources such as deployments; it has no permission to view pods, so the get fails.
kubectl --as=system:serviceaccount:app-team1:cicd-token get pods -n app-team1
kubectl --as=system:serviceaccount:app-team1:cicd-token create deployment nginx --image=nginx -n app-team1
The create, on the other hand, succeeds, which shows the binding grants the create permission.
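You can also ask the API server directly with kubectl auth can-i, which avoids creating test objects (this check is optional and not part of the task):
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # yes
kubectl auth can-i get pods --as=system:serviceaccount:app-team1:cicd-token -n app-team1             # no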
II. Node Maintenance -- Marking a Node Unavailable
Official reference: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/safely-drain-node/
Exam Task
1. Switch clusters first
Switch clusters first: the exam has multiple clusters and almost every question maps to its own cluster, so always switch before working on a question.
kubectl config use-context ek8s
# The practice environment is a single cluster, so no switch is needed.
2. Mark the ek8s-node01 node as unschedulable
kubectl cordon ek8s-node01
3. Drain the Pods on ek8s-node01
kubectl drain ek8s-node01 --delete-emptydir-data --ignore-daemonsets --force
# Task complete
Note: if kubectl drain reports an error, add the options it suggests, e.g. --delete-local-data --force
You can check the options of the drain command with the built-in help:
kubectl drain -h
root@k8s-master1:~# kubectl drain -h
Drain node in preparation for maintenance.

 'drain' waits for graceful termination. You should not operate on the machine until the command completes.

 When you are ready to put the node back into service, use kubectl uncordon, which will make the node schedulable again.

Examples:
  # Drain node "foo", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it
  kubectl drain foo --force

  # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes
  kubectl drain foo --grace-period=900

Options:
    --delete-emptydir-data=false:
        Continue even if there are pods using emptyDir (local data that will be deleted when the node is drained).

    --force=false:
        Continue even if there are pods that do not declare a controller.

    --ignore-daemonsets=false:
        Ignore DaemonSet-managed pods.

    --pod-selector='':
        Label selector to filter pods on the node

    --timeout=0s:
        The length of time to wait before giving up, zero means infinite

Usage:
  kubectl drain NODE [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).
Test
# This part is not executed in the exam
kubectl get pod                # no pods left on the node
kubectl uncordon ek8s-node01   # make the node schedulable again
kubectl get node
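As an extra practice-only check, you can list what is still running on the drained node; only DaemonSet-managed pods should remain:
kubectl get pod -A -o wide --field-selector spec.nodeName=ek8s-node01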
III. Kubernetes Version Upgrade
Official reference: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
Exam Task
Approach
The upgrade procedure is standard; you can copy it straight from the official documentation.
Read the question carefully and confirm which version you must upgrade to.
# Check whether the target version is available (mind the version number; the exam may ask for a different one)
apt-cache madison kubeadm | grep '1.23.'
# Drain the node to be upgraded first; do the same when upgrading worker nodes
# Evict the Pods on the node and mark it for maintenance
kubectl drain ek8s-master01 --delete-emptydir-data --ignore-daemonsets --force
# ssh to the node being upgraded (in the practice environment, operate on ek8s-master01)
apt update
# Upgrade kubeadm
apt install kubeadm=1.23.0-00
# Verify the upgrade plan with kubeadm
kubeadm upgrade plan   # use this when upgrading the control-plane node; for worker nodes follow the official docs
# Copy the suggested command, but disable the etcd upgrade
kubeadm upgrade apply v1.23.0 --etcd-upgrade=false -f
# It is fine if the upgrade plan does not mention 1.23.0; just upgrade. If the version jump is too large, e.g.
# 1.22.0 -> 1.23.17, the reported version may not update after the upgrade; upgrade to an intermediate
# version first instead of jumping too many versions at once.
# Upgrade kubectl and kubelet
apt-mark unhold kubelet kubectl
apt install kubelet=1.23.0-00 kubectl=1.23.0-00
apt-mark hold kubelet kubectl
# Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
# Check the version
kubectl get node
My cluster is currently on version 1.27.4.
1. Decide which version to upgrade to
Use the operating system's package manager to find the latest patch version:
# Ubuntu
# In the list, look for the minor release you are upgrading to (1.28 in this walkthrough)
# Entries look like 1.28.x-*, where x is the latest patch version
sudo apt update
sudo apt-cache madison kubeadm
2. Upgrade the control plane node
1. Upgrade kubeadm
# replace x in 1.28.x-* with the latest patch version number
sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='1.28.2-00' && \
sudo apt-mark hold kubeadm
2. Verify the download works and the kubeadm version is correct
kubeadm version
3. Verify the upgrade plan
sudo kubeadm upgrade plan
This command checks whether your cluster can be upgraded and fetches the versions you can upgrade to. It also shows a table with the version status of the component configurations.
root@k8s-master1:~# sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.27.0
[upgrade/versions] kubeadm version: v1.28.2
I1106 19:01:41.405382 94653 version.go:256] remote version is much newer: v1.31.2; falling back to: stable-1.28
[upgrade/versions] Target version: v1.28.15
[upgrade/versions] Latest version in the v1.27 series: v1.27.16

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.27.4   v1.27.16

Upgrade to the latest version in the v1.27 series:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.27.0   v1.27.16
kube-controller-manager   v1.27.0   v1.27.16
kube-scheduler            v1.27.0   v1.27.16
kube-proxy                v1.27.0   v1.27.16
CoreDNS                   v1.10.1   v1.10.1
etcd                      3.5.7-0   3.5.9-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.27.16

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.27.4   v1.28.15

Upgrade to the latest stable version:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.27.0   v1.28.15
kube-controller-manager   v1.27.0   v1.28.15
kube-scheduler            v1.27.0   v1.28.15
kube-proxy                v1.27.0   v1.28.15
CoreDNS                   v1.10.1   v1.10.1
etcd                      3.5.7-0   3.5.9-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.28.15

Note: Before you can perform this upgrade, you have to update kubeadm to v1.28.15.

_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
4. Choose the target version and run the appropriate command, making sure etcd is not upgraded
kubeadm upgrade apply v1.28.2 --etcd-upgrade=false -f
Before running kubeadm upgrade apply you must disable the etcd upgrade, otherwise etcd is upgraded too and there is a risk of losing data.
Do not copy the suggested "kubeadm upgrade apply v1.28.15" verbatim; it will fail here. Follow the prompt in the output and change the version to v1.28.2, matching the installed kubeadm.
When the command finishes you will see SUCCESS.
kubeadm version now shows that the upgrade succeeded.
3. Drain the node
Mark the node as unschedulable and evict all workloads to prepare it for maintenance:
# replace <node-to-drain> with the name of the control-plane node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
# e.g.: kubectl drain mk8s-master-0 --ignore-daemonsets
4. Upgrade kubelet and kubectl
1. Upgrade kubelet and kubectl
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='1.28.2-00' kubectl='1.28.2-00' && \
sudo apt-mark hold kubelet kubectl
2. Restart kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
5. Uncordon the node
Bring the node back online by marking it schedulable again:
# replace <node-to-uncordon> with your node name
kubectl uncordon <node-to-uncordon>
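As a final sanity check (optional, not required by the question), confirm the node is Ready and reports the new kubelet version:
kubectl get node   # the VERSION column should now show v1.28.2 for the upgraded node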
IV. etcd Backup and Restore
Official reference: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster
Exam Task
No configuration context change required for this item.
Task
First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to /srv/data/etcd-snapshot.db.
Creating a snapshot of the given instance is expected to complete in seconds.
If the operation seems to hang, something is likely wrong with your command. Use CTRL + C to cancel the operation and try again.
Next, restore an existing, previous snapshot located at /var/lib/backup/etcd-snapshot-previous.db.
The following TLS certificates/key are supplied for connecting to the server with etcdctl:
·CA certificate: /opt/KUIN00601/ca.crt
·Client certificate: /opt/KUIN00601/etcd-client.crt
·Client key: /opt/KUIN00601/etcd-client.key
Pay special attention in the exam: 127.0.0.1 is a loopback address, so read carefully which machine you must run the backup on and which machine you must run the restore on.
# In the exam, first ssh to the node specified in the question
# Backup
export ETCDCTL_API=3
etcdctl --endpoints="https://127.0.0.1:2379" \
  --cacert=/opt/KUIN00601/ca.crt \
  --cert=/opt/KUIN00601/etcd-client.crt \
  --key=/opt/KUIN00601/etcd-client.key \
  snapshot save /srv/data/etcd-snapshot.db
# If the command fails with "Error: context deadline exceeded",
# the certificate has expired; run:
# cp /etc/kubernetes/pki/etcd/peer.crt /opt/KUIN00601/etcd-client.crt
# cp /etc/kubernetes/pki/etcd/peer.key /opt/KUIN00601/etcd-client.key
# Confirm the backup file
etcdctl snapshot status /srv/data/etcd-snapshot.db -w table
# Checking an existing snapshot does not need a connection to etcd; it is done entirely by etcdctl,
# so --endpoints --cacert --cert --key can be omitted.
# The same applies to etcdctl snapshot restore below.
While practicing, you can leave a marker first, e.g. kubectl create ns etcdtest to create a namespace. After the restore below finishes, check whether that namespace still exists: if it is gone, the restore succeeded; if it is still there, the restore failed.
This is only for verification while practicing; you do not need to do it in the exam.
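A concrete version of that practice check could look like this (etcdtest is only the marker namespace mentioned above; the actual restore commands are in section 2 below):
# before the restore: leave a marker
kubectl create ns etcdtest
# ... restore from the older snapshot ...
# after the restore: the marker namespace should be gone
kubectl get ns etcdtest   # expect "Error from server (NotFound)" if the restore worked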
Help
etcdctl -h
root@k8s-master1:~# etcdctl -h
NAME:
        etcdctl - A simple command line client for etcd3.

USAGE:
        etcdctl [flags]

VERSION:
        3.5.1

API VERSION:
        3.5

COMMANDS:
        snapshot restore        Restores an etcd member snapshot to an etcd directory
        snapshot save           Stores an etcd node backend snapshot to a given file
        snapshot status         [deprecated] Gets backend snapshot status of a given file

OPTIONS:
        --cacert=""                     verify certificates of TLS-enabled secure servers using this CA bundle
        --cert=""                       identify secure client using this TLS certificate file
        --endpoints=[127.0.0.1:2379]    gRPC endpoints
        --key=""                        identify secure client using this TLS key file
1. Backup
ETCDCTL_API=3 etcdctl snapshot save /srv/data/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/opt/KUIN00601/ca.crt \
  --cert=/opt/KUIN00601/etcd-client.crt \
  --key=/opt/KUIN00601/etcd-client.key
2. Restore
systemctl stop etcd
systemctl cat etcd   # confirm the data directory
mv /var/lib/etcd/default.etcd /var/lib/etcd/default.etcd.bak
ETCDCTL_API=3 etcdctl snapshot restore /var/lib/backup/etcd-snapshot-previous.db \
  --data-dir=/var/lib/etcd/default.etcd
chown -R etcd:etcd /var/lib/etcd
systemctl start etcd
Check the size of the backup file
V. Network Policy -- NetworkPolicy
Official reference: https://kubernetes.io/zh-cn/docs/concepts/services-networking/network-policies/
The network policy question can appear in several variants.
Variant 1
Example: cka-05-1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 9000
kubectl create -f cka-05-1.yaml
Variant 2
Get the labels of the big-corp namespace
kubectl get ns big-corp --show-labels
Example: cka-05-2.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  ingress:
  - from:
    - podSelector: {}
    ports:
    - port: 8080
      protocol: TCP
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: big-corp
    ports:
    - protocol: TCP
      port: 8080
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
The port may be different, e.g. 9000; just substitute it.
kubectl apply -f cka-05-2.yml
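Optionally (not required by the task), you can confirm that the rules were parsed as intended:
kubectl describe networkpolicy allow-port-from-namespace -n my-app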
Variant 3
Compared with the first two, variant 3 is simpler: it only needs to allow traffic from the internal namespace.
Example: cka-05-3.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: big-corp
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: internal
    ports:
    - port: 9200
      protocol: TCP
  podSelector: {}
  policyTypes:
  - Ingress
kubectl create -f cka-05-3.yml
VI. Exposing an Application with a Service
Official reference: https://kubernetes.io/zh-cn/docs/tutorials/services/connect-applications-service/
Exam Task
Switch the cluster environment
kubectl config use-context k8s
Edit the deployment to expose the container port. In the exam, front-end already exists.
kubectl edit deploy front-end
Add the following configuration under the container named nginx:
....
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    ports:                  ## add this line
    - containerPort: 80     ## add this line
      name: http            ## add this line
      protocol: TCP         ## add this line
Create the Service:
kubectl expose deployment front-end --port=80 --target-port=80 --type=NodePort --name=front-end-svc
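A quick optional check is to look at the created Service and curl it through the node port (the node IP and nodePort below are placeholders):
kubectl get svc front-end-svc -o wide
curl http://<node-IP>:<nodePort>   # should return the nginx welcome page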
Help
kubectl expose -h
root@k8s-master1:~# kubectl expose -h
Expose a resource as a new Kubernetes service.

 Possible resources include (case insensitive):
 pod (po), service (svc), replicationcontroller (rc), deployment (deploy), replicaset (rs)

Examples:
  # Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000
  kubectl expose rc nginx --port=80 --target-port=8000

  # Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000
  kubectl expose rs nginx --port=80 --target-port=8000

  # Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000
  kubectl expose deployment nginx --port=80 --target-port=8000

Options:
    --name='':
        The name for the newly created object.

    --port='':
        The port that the service should serve on. Copied from the resource being exposed, if unspecified

    --target-port='':
        Name or number for the port on the container that the service should direct traffic to. Optional.

    --type='':
        Type for this service: ClusterIP, NodePort, LoadBalancer, or ExternalName. Default is 'ClusterIP'.

Usage:
  kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).
VII. Ingress Layer-7 Proxy
Official reference: https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/
Exam Task
Remember to switch the environment
kubectl config use-context k8s
vim cka-08.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
spec:
  # which controller manages this Ingress; check with kubectl get ingressclass
  # in the exam this field must be set correctly or points are deducted
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
kubectl create -f cka-08.yaml
Pay attention to the ingressClassName field. It is usually nginx; in the exam, confirm it first with kubectl get ingressclass.
You can verify this task with curl -kL <ingress-controller-IP>/hi
kubectl get ingress -n ing-internal
curl -kL <ingress-IP>/hi
VIII. Scaling a Deployment
Exam Task
Method 1
Command syntax
kubectl scale <resource-type>/<name> --replicas=<count> -n <namespace>
Example
Suppose you have a Deployment named my-deployment in the namespace my-namespace and want to scale it to 5 replicas:
kubectl scale deployment/my-deployment --replicas=5 -n my-namespace
# Scale to 6 replicas. In the exam a namespace may be specified (use -n), the replica count may differ from 6,
# or the name may be different; adjust accordingly.
kubectl scale --replicas=6 deployment loadbalancer
# Check: only the replica count matters; whether the pods actually come up does not.
kubectl get deployments.apps loadbalancer
Help command
kubectl scale --help
Method 2
Edit loadbalancer directly and change replicas to 6
kubectl edit deployment loadbalancer
IX. nodeSelector -- Scheduling a Pod to a Specific Node
Official reference: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/
Exam Task
vim cka09.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  nodeSelector:
    disk: spinning
  containers:
  - name: nginx
    image: nginx
kubectl create -f cka09.yaml
kubectl get pod nginx-kusc00401 -owide
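In the exam the disk=spinning label already exists on one of the nodes. In a practice cluster you may need to add it yourself first (the node name below is just an example):
kubectl get nodes --show-labels | grep disk
kubectl label node k8s-node01 disk=spinning   # practice environment only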
X. Checking Node Health
Exam Task
kubectl describe node $(kubectl get nodes | grep -w Ready | awk '{print $1}') | grep Taints | grep -vc NoSchedule
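Reading the one-liner step by step (the commands below are only for understanding; the answer is the single number printed by the full command):
# 1. list the nodes whose STATUS is exactly Ready (-w avoids matching NotReady)
kubectl get nodes | grep -w Ready | awk '{print $1}'
# 2. describe those nodes and keep only their Taints lines
# 3. grep -vc NoSchedule counts the Taints lines that do NOT contain NoSchedule,
#    i.e. the number of Ready nodes that can still accept new Pods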
XI. Creating a Multi-Container Pod
Official reference: https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/
Search the official docs for the keyword Pod
Exam Task
vim cka-11-multiPod.yml
apiVersion: v1
kind: Pod
metadata:
  name: kucc4
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul
One of the four containers (consul) failed to pull its image; this should not happen in the exam.
As you can see, because one of the four containers in the kucc4 pod failed to pull its image, the pod status is ImagePullBackOff.
XII. Creating a PV
Official reference: https://kubernetes.io/zh-cn/docs/concepts/storage/persistent-volumes/
Search keyword: persistent-volumes
Exam Task
vim cka12-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/srv/app-config"
kubectl apply -f cka12-pv.yaml
kubectl get -f cka12-pv.yaml
XIII. Creating a PVC
Official reference: https://kubernetes.io/zh-cn/docs/concepts/storage/persistent-volumes/
Search keyword: PersistentVolumeClaims
Exam Task
vim cka13-1-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
kubectl apply -f cka13-1-pvc.yaml
kubectl get -f cka13-1-pvc.yaml
vim cka13-2-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server                        ## modify
spec:
  containers:
  - name: web-server                      ## modify
    image: nginx                          ## modify
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"  ## modify
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: pv-volume                ## modify
student@ek8s-master01:~$ kubectl apply -f cka13-2-pod.yaml
pod/web-server created
student@ek8s-master01:~$ kubectl get -f cka13-2-pod.yaml
NAME         READY   STATUS    RESTARTS   AGE
web-server   1/1     Running   0          7s
# The volume is mounted
student@ek8s-master01:~$ kubectl describe pod web-server | grep pv-volume
    /usr/share/nginx/html from pv-volume (rw)
  pv-volume:
    ClaimName:  pv-volume
XIV. Extracting Pod Error Logs
Official reference: https://kubernetes.io/zh-cn/docs/reference/kubectl/
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
Exam Task
# Switch clusters first: the exam has multiple clusters and almost every question maps to its own cluster.
# The practice environment is a single cluster, so no switch is needed.
kubectl config use-context k8s
kubectl logs foobar | grep unable-access-website > /opt/KUTR00101/foobar
# check the file afterwards
cat /opt/KUTR00101/foobar
Variant
kubectl config use-context k8s
kubectl logs foobar -c container01 | grep unable-access-website > /opt/KUTR00101/foobar
# check the file afterwards
cat /opt/KUTR00101/foobar
XV. Sidecar Log Proxy
Official reference: https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/logging/#sidecar-container-with-logging-agent
Exam Task
Analysis
1. Switch clusters. 2. Export the legacy-app pod to a yaml file and delete the pod. 3. Edit the exported yaml and add the sidecar container. 4. Apply the yaml.
Steps
1. Switch clusters.
kubectl config use-context k8s
2. Export the existing legacy-app pod, then delete legacy-app
kubectl get pod legacy-app -o yaml > legacy-app.yaml   # edit this file after exporting
kubectl delete pod legacy-app
kubectl apply -f legacy-app.yaml   # run this only after finishing the edits in step 3
3. Edit the legacy-app.yaml file.
vim legacy-app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/legacy-app.log;
        sleep 1;
      done
    volumeMounts:               ## add
    - name: varlog              ## add
      mountPath: /var/log       ## add
  - name: sidecar               ## add
    image: busybox              ## add
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log']   ## add
    volumeMounts:               ## add
    - name: varlog              ## add
      mountPath: /var/log       ## add
  ......
  volumes:          # the volumes block already exists in the exported yaml; just add the two lines below
  - name: varlog    ## add
    emptyDir: {}    ## add
Note: containers cannot be added to a running Pod, so export the yaml first, add the sidecar, and then apply it.
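After applying, an optional check is to confirm that the sidecar is streaming the log file:
kubectl logs legacy-app -c sidecar   # should print the "<i>: <date>" lines written by the count container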
XVI. Finding the Pod with the Highest CPU Usage
kubectl config use-context k8s
kubectl top pod -l name=cpu-utilizer --sort-by="cpu" -A
# then write the pod name to the file
echo "<podname>" > /opt/KUTR00401/KUTR00401.txt
kubectl top pod -h
Help output
kubectl top pod -h
root@k8s-master1:~# kubectl top pod -h
Display resource (CPU/memory) usage of pods.

 The 'top pod' command allows you to see the resource consumption of pods.

 Due to the metrics pipeline delay, they may be unavailable for a few minutes since pod creation.

Aliases:
pod, pods, po

Examples:
  # Show metrics for all pods in the default namespace
  kubectl top pod

  # Show metrics for all pods in the given namespace
  kubectl top pod --namespace=NAMESPACE

  # Show metrics for a given pod and its containers
  kubectl top pod POD_NAME --containers

  # Show metrics for the pods defined by label name=myLabel
  kubectl top pod -l name=myLabel

Options:
    -A, --all-namespaces=false:
        If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.

    --containers=false:
        If present, print usage of containers within a pod.

    -l, --selector='':
        Selector (label query) to filter on, supports '=', '==', and '!='. (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.

    --sort-by='':
        If non-empty, sort pods list using specified field. The field can be either 'cpu' or 'memory'.

Usage:
  kubectl top pod [NAME | -l label] [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).
XVII. Handling a NotReady Node
Exam Task
Analysis
Steps
1. Switch clusters.
kubectl config use-context wk8s
2. Check the status of all nodes.
kubectl get node
3. Find the problem node, ssh to it, and switch to the root user.
ssh wk8s-node-0
sudo -i
4. Check the kubelet service status, start it, and enable it at boot.
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet
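Back on the exam host (an optional check), the node should return to Ready shortly after kubelet is running:
exit               # leave the root shell
exit               # close the ssh session back to the exam host
kubectl get node   # wk8s-node-0 should now show Ready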