93. k8s: HPA + Helm


1. HPA

HPA:

Scales the number of pods up and down.

It targets pods created by a controller.

deployment:

replicas:

static change: kubectl edit

yaml: kubectl apply -f

HPA: automatically scales the number of pods based on CPU utilization.

Horizontal Pod Autoscaling

yaml file (the mainstream way) ------> the resource-limit fields must be present for the HPA to take effect.

The command line can also generate an HPA configuration (see the one-liner below).

Necessary conditions for the yaml approach:

Precondition: the pods are created by a controller, and that controller must allow setting a replica count.

Configuration requirement: the pod must declare resource limits.
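For the command-line route, kubectl can create the autoscaler directly. A sketch, assuming the hpa-test1 Deployment defined later in this section already exists:

    # imperative alternative to writing the HPA yaml by hand
    kubectl autoscale deployment hpa-test1 --cpu-percent=50 --min=1 --max=6
    kubectl get hpa    # confirm the HorizontalPodAutoscaler was created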

1. metrics-server: the dependency HPA relies on. It is the cluster-wide aggregator of resource metrics in k8s: it collects resource-usage data and serves it to the HPA and the scheduler.
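For reference, the edit made to components.yaml during the install below is usually the kubelet TLS workaround for lab clusters. A sketch of the typical fragment (the flags are real metrics-server options; the exact file used here was not shown):

    # components.yaml (fragment): metrics-server container args
    args:
      - --kubelet-insecure-tls                          # skip kubelet certificate verification (lab use only)
      - --kubelet-preferred-address-types=InternalIP    # reach kubelets by node IP rather than hostname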

echo 1 > /proc/sys/vm/drop_caches # free the page cache
echo 2 > /proc/sys/vm/drop_caches # free dentries and inodes
echo 3 > /proc/sys/vm/drop_caches # free all caches

2. Scale-out and scale-in speed:

Once the threshold is reached, scale-out happens immediately; scale-in is comparatively slow.

Scale-out must be fast so the pods can keep serving traffic.

Scale-in guards against the load spiking again: the pod count is held steady for a while, and only after resource usage has stayed at a low level for some time does the HPA slowly scale in.
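On clusters that serve the autoscaling/v2 API, this fast-out/slow-in behavior is tunable per HPA via the behavior field. A minimal sketch, not part of this lab (the lab below uses autoscaling/v1; the name is hypothetical):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: hpa-behavior-demo        # hypothetical name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: hpa-test1
      minReplicas: 1
      maxReplicas: 6
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 50
      behavior:
        scaleUp:
          stabilizationWindowSeconds: 0     # scale out as soon as the threshold is crossed
        scaleDown:
          stabilizationWindowSeconds: 300   # require 5 minutes of low usage before scaling in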

[root@master01 opt]# mkdir hap
[root@master01 opt]# cd /opt/
[root@node01 opt]# rz -E  ## on all nodes
rz waiting to receive.
[root@node01 opt]# ls
metrics-server.tar
[root@master01 opt]# docker load -i metrics-server.tar  ## on all nodes
[root@master01 opt]# cd hap/
[root@master01 hap]# rz -E
rz waiting to receive.
[root@master01 hap]# ls
components.yaml
[root@master01 hap]# vim components.yaml 
[root@master01 hap]# kubectl apply -f components.yaml 
[root@master01 hap]# kubectl get pod -n kube-system 
[root@master01 hap]# kubectl top node 
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master01   536m         13%    3405Mi          44%       
node01     64m          1%     2204Mi          28%       
node02     81m          2%     2559Mi          33%
[root@master01 hap]# kubectl explain hpa
KIND:     HorizontalPodAutoscaler
VERSION:  autoscaling/v1
[root@master01 hap]# vim hpa.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-test1
  labels:
    hpa: test1
spec:
  replicas: 1
  selector:
    matchLabels:
      hpa: test1
  template:
    metadata:
      labels:
        hpa: test1
    spec:
      containers:
      - name: centos
        image: centos:7
        command: ["/bin/bash", "-c", "yum install -y epel-release --nogpgcheck && yum install -y stress --nogpgcheck && sleep 3600"]
        volumeMounts:
        - name: yum1
          mountPath: /etc/yum.repos.d/
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
      volumes:
      - name: yum1
        hostPath:
          path: /etc/yum.repos.d/
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-centos1
spec:
  scaleTargetRef:
    # matches the kind and name of the object to monitor
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-test1
  minReplicas: 1
  # when usage is low, scale in, but keep at least this many pods
  maxReplicas: 6
  # when usage is high, scale out up to this many pods
  targetCPUUtilizationPercentage: 50
  # scaling metric and threshold: 50% CPU utilization
[root@master01 hap]# kubectl logs -f hpa-test1-7558dfb467-bv85l 
Loaded plugins: fastestmirror, ovl
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Determining fastest mirrors
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container error was
14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Unknown error"
[root@master01 hap]# cd /etc/yum.repos.d/
[root@master01 yum.repos.d]# ls
backup            CentOS-Debuginfo.repo  CentOS-Vault.repo  kubernetes.repo
Centos-7.repo     CentOS-fasttrack.repo  docker-ce.repo     local.repo
CentOS-Base.repo  CentOS-Media.repo      epel.repo
CentOS-CR.repo    CentOS-Sources.repo    epel-testing.repo
[root@master01 yum.repos.d]# rm -rf local.repo
------------------------ error: yum repo problem ------------
[root@master01 hap]# kubectl logs -f hpa-test1-7558dfb467-r8t9t 
Loaded plugins: fastestmirror, ovl
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Determining fastest mirrors
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container error was
14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Unknown error"

 One of the configured repositories failed (Unknown),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=<repoid> ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable <repoid>
        or
            subscription-manager repos --disable=<repoid>

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot find a valid baseurl for repo: base/7/x86_64
[root@master01 hap]# kubectl delete -f hpa.yaml 
deployment.apps "hpa-test1" deleted
horizontalpodautoscaler.autoscaling "hpa-centos1" deleted
[root@master01 hap]# kubectl apply -f hpa.yaml 
deployment.apps/hpa-test1 created
horizontalpodautoscaler.autoscaling/hpa-centos1 created
[root@master01 hap]# kubectl get pod -o wide
NAME                             READY   STATUS        RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
hpa-test1-7558dfb467-r8t9t       0/1     Error         0          2s    10.244.1.3     node01     <none>           <none>
---------------------------------------------
The logs show the yum mirror cannot be found; the yum repos need to be configured on all nodes:
[root@master01 yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
--2024-09-12 10:58:54--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving host mirrors.aliyun.com (mirrors.aliyun.com)... 58.222.47.242, 58.222.47.240, 121.228.130.155, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|58.222.47.242|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: "/etc/yum.repos.d/CentOS-Base.repo"

100%[========================================>] 2,523       --.-K/s  in 0s      

2024-09-12 10:58:54 (250 MB/s) - "/etc/yum.repos.d/CentOS-Base.repo" saved [2523/2523]

[root@node01 yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@node02 yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
--------------------- reconfigure the yum repos (or, alternatively, don't install epel in the manifest) -------------
[root@master01 hap]# vim hpa.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-test1
  labels:
    hpa: test1
spec:
  replicas: 1
  selector:
    matchLabels:
      hpa: test1
  template:
    metadata:
      labels:
        hpa: test1
    spec:
      containers:
      - name: centos
        image: centos:7
        command: ["/bin/bash", "-c", "yum install -y stress --nogpgcheck && sleep 3600"]
        volumeMounts:
        - name: yum1
          mountPath: /etc/yum.repos.d/
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
      volumes:
      - name: yum1
        hostPath:
          path: /etc/yum.repos.d/
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-centos1
spec:
  scaleTargetRef:
    # matches the kind and name of the object to monitor
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-test1
  minReplicas: 1
  # when usage is low, scale in, but keep at least this many pods
  maxReplicas: 6
  # when usage is high, scale out up to this many pods
  targetCPUUtilizationPercentage: 50
  # scaling metric and threshold: 50% CPU utilization
-----------------------------------
[root@master01 hap]# kubectl delete -f hpa.yaml 
deployment.apps "hpa-test1" deleted
horizontalpodautoscaler.autoscaling "hpa-centos1" deleted
[root@master01 hap]# kubectl apply -f hpa.yaml 
deployment.apps/hpa-test1 created
horizontalpodautoscaler.autoscaling/hpa-centos1 created
[root@master01 hap]# kubectl get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP             NODE       NOMINATED NODE   READINESS GATES
hpa-test1-7558dfb467-zwhh7       1/1     Running   0          4s     10.244.2.16    node02     <none>           <none>
[root@master01 opt]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   0%/50%    1         6         1          29m
-------------------------------------
[root@master01 hap]# kubectl logs -f hpa-test1-7558dfb467-zwhh7 
Loaded plugins: fastestmirror, ovl
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Repository contrib is listed more than once in the configuration
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * epel: repo.jing.rocks
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
http://mirrors.aliyuncs.com/centos/7/os/x86_64/repodata/6d0c3a488c282fe537794b5946b01e28c7f44db79097bb06826e1c0c88bad5ef-primary.sqlite.bz2: [Errno 14] curl#7 - "Failed connect to mirrors.aliyuncs.com:80; Connection refused"
Trying other mirror.
Resolving Dependencies
--> Running transaction check
---> Package epel-release.noarch 0:7-14 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                Arch             Version           Repository      Size
================================================================================
Installing:
 epel-release           noarch           7-14              epel            15 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 15 k
Installed size: 25 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : epel-release-7-14.noarch                                     1/1 
warning: /etc/yum.repos.d/epel-testing.repo created as /etc/yum.repos.d/epel-testing.repo.rpmnew
warning: /etc/yum.repos.d/epel.repo created as /etc/yum.repos.d/epel.repo.rpmnew
  Verifying  : epel-release-7-14.noarch                                     1/1 

Installed:
  epel-release.noarch 0:7-14                                                    

Complete!
Loaded plugins: fastestmirror, ovl
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Repository contrib is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * epel: repo.jing.rocks
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package stress.x86_64 0:1.0.4-16.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package          Arch             Version                 Repository      Size
================================================================================
Installing:
 stress           x86_64           1.0.4-16.el7            epel            39 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 39 k
Installed size: 94 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : stress-1.0.4-16.el7.x86_64                                   1/1 
install-info: No such file or directory for /usr/share/info/stress.info
  Verifying  : stress-1.0.4-16.el7.x86_64                                   1/1 

Installed:
  stress.x86_64 0:1.0.4-16.el7                                                  

Complete!
----------------------------------------------
[root@master01 hap]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP             NODE     NOMINATED NODE   READINESS GATES
hpa-test1-7558dfb467-zwhh7   1/1     Running   0          35m    10.244.2.16    node02   <none>           <none>
nfs1-76f66b958-68wpl         1/1     Running   0          6d1h   10.244.2.173   node02   <none>           <none>
[root@master01 hap]# kubectl top node node02
NAME     CPU(cores)            CPU%              MEMORY(bytes)   MEMORY%   
node02   75m (used by k8s)     1% (of the node)  2845Mi          36%     
[root@master01 hap]# kubectl get hpa
NAME          REFERENCE              TARGETS                                  MINPODS       MAXPODS       REPLICAS                    AGE
hpa-centos1   Deployment/hpa-test1   0%/50% (current / configured threshold) 1 (minimum)   6 (maximum)   1 (current replica count)   36m

Run top on node02 and press 1 to watch per-CPU usage:

[root@master01 hap]# kubectl exec -it hpa-test1-7558dfb467-zwhh7 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@hpa-test1-7558dfb467-zwhh7 /]# stress -c 4
stress: info: [67] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd
[root@master01 opt]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   88%/50%   1         6         2          43m
[root@master01 yum.repos.d]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP             NODE     NOMINATED NODE   READINESS GATES
hpa-test1-7558dfb467-rj7h5   1/1     Running   0          44m    10.244.1.4     node01   <none>           <none>
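The replica count the HPA picks follows its documented formula: desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization). Here that is ceil(1 × 88 / 50) = 2, which matches the REPLICAS column above.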




[root@master01 opt]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   49%/50%   1         6         4          43m
[root@master01 opt]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   49%/50%   1         6         4          43m
[root@master01 opt]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   25%/50%   1         6         4          44m
[root@master01 opt]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   25%/50%   1         6         4          44m
[root@master01 opt]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   25%/50%   1         6         4          44m
[root@master01 opt]# kubectl get hpa
[root@master01 opt]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP             NODE       NOMINATED NODE   READINESS GATES
hpa-test1-7558dfb467-cfn8l   1/1     Running   0          2m33s   10.244.2.17    node02     <none>           <none>
hpa-test1-7558dfb467-rj7h5   1/1     Running   0          3m34s   10.244.1.4     node01     <none>           <none>
hpa-test1-7558dfb467-swv7m   1/1     Running   0          2m33s   10.244.0.42    master01   <none>           <none>
hpa-test1-7558dfb467-zwhh7   1/1     Running   0          45m     10.244.2.16    node02     <none>           <none>
nfs1-76f66b958-68wpl         1/1     Running   0          6d1h    10.244.2.173   node02     <none>           <none>

Check the logs of a scaled-out pod, then load another pod:

[root@master01 opt]# kubectl logs -f hpa-test1-7558dfb467-cfn8l 
[root@master01 opt]# kubectl exec -it hpa-test1-7558dfb467-rj7h5 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@hpa-test1-7558dfb467-rj7h5 /]# stress -c 4
stress: info: [65] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd

Run top on node01 and press 1 to watch per-CPU usage:

[root@master01 yum.repos.d]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   65%/50%   1         6         5          54m
[root@master01 yum.repos.d]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   65%/50%   1         6         5          54m
[root@master01 yum.repos.d]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   65%/50%   1         6         5          54m
[root@master01 yum.repos.d]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP             NODE       NOMINATED NODE   READINESS GATES
hpa-test1-7558dfb467-bq64d   1/1     Running   0          77s    10.244.2.19    node02     <none>           <none>
hpa-test1-7558dfb467-ns6hs   1/1     Running   0          31s    10.244.1.5     node01     <none>           <none>
hpa-test1-7558dfb467-rj7h5   1/1     Running   0          12m    10.244.1.4     node01     <none>           <none>
hpa-test1-7558dfb467-swv7m   1/1     Running   0          11m    10.244.0.42    master01   <none>           <none>
hpa-test1-7558dfb467-zl7hf   1/1     Running   0          77s    10.244.2.18    node02     <none>           <none>
nfs1-76f66b958-68wpl         1/1     Running   0          6d1h   10.244.2.173   node02     <none>           <none>

Stop the stress runs and check top on the corresponding node:

stress: info: [66] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd
^C
[root@hpa-test1-7558dfb467-swv7m /]# stress -c 4
stress: info: [66] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd
^C
[root@master01 yum.repos.d]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   0%/50%    1         6         5          60m
[root@hpa-test1-7558dfb467-rj7h5 /]# stress -c 4
stress: info: [65] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd
^C
[root@master01 yum.repos.d]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   0%/50%    1         6         5          60m
[root@master01 yum.repos.d]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   0%/50%    1         6         4          61m
[root@master01 yum.repos.d]# kubectl get hpa
NAME          REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos1   Deployment/hpa-test1   0%/50%    1         6         1          85m
[root@master01 yum.repos.d]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP             NODE     NOMINATED NODE   READINESS GATES
hpa-test1-7558dfb467-rj7h5   1/1     Running   0          44m    10.244.1.4     node01   <none>           <none>
nfs1-76f66b958-68wpl         1/1     Running   0          6d2h   10.244.2.173   node02   <none>           <none>
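For reference, the pace of the 5 -> 4 -> 1 scale-in above is governed by the controller manager's downscale stabilization window, which defaults to 5 minutes. On a kubeadm cluster it can be tuned through the kube-controller-manager static pod manifest. A sketch, assuming the standard kubeadm file layout:

    # /etc/kubernetes/manifests/kube-controller-manager.yaml (fragment)
    spec:
      containers:
      - command:
        - kube-controller-manager
        - --horizontal-pod-autoscaler-downscale-stabilization=10m0s   # hold scale-in for 10 minutes instead of the default 5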

Create a namespace and apply resource limits to it

[root@master01 etc]# cd /opt/
[root@master01 opt]# kubectl create ns xy102
namespace/xy102 created
[root@master01 opt]# vim xy102.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ns-xy102
  namespace: xy102
  # apply resource limits to the specified namespace
spec:
  hard:
    pods: "20"
    # maximum number of pods this namespace may create
    requests.cpu: "2"
    # may request at most 2 CPUs in total
    requests.memory: 1Gi
    # may request at most 1Gi of memory in total
    limits.cpu: "3"
    limits.memory: 2Gi
    # limits.*: the most that may be consumed in total
    configmaps: "10"
    # cap on the number of ConfigMaps
    persistentvolumeclaims: "4"
    # cap on the number of PVC requests
    secrets: "10"
    # cap on the number of Secrets
    services: "10"
    # cap on the number of Services
[root@master01 opt]# kubectl apply -f xy102.yaml 
resourcequota/ns-xy102 created
[root@master01 opt]# kubectl describe namespaces xy102 
Name:         xy102
Labels:       <none>
Annotations:  <none>
Status:       Active

Resource Quotas
 Name:                   ns-xy102
 Resource                Used  Hard
 --------                ---   ---
 configmaps              1     10
 limits.cpu              0     3
 limits.memory           0     2Gi
 persistentvolumeclaims  0     4
 pods                    0     20
 requests.cpu            0     2
 requests.memory         0     1Gi
 secrets                 1     10
 services                0     10

No LimitRange resource.
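With the quota in place but no LimitRange yet, a pod that declares no requests/limits is rejected outright, because the quota controller cannot charge it against the namespace budget. A sketch of the expected behavior (pod name hypothetical; message shape follows upstream Kubernetes):

    kubectl run quota-test --image=centos:7 -n xy102 -- sleep 3600
    # Error from server (Forbidden): pods "quota-test" is forbidden: failed quota: ns-xy102:
    # must specify limits.cpu,limits.memory,requests.cpu,requests.memory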

limitRange:

Every pod created in this namespace is uniformly subjected to the resource limits configured in the LimitRange.

[root@master01 opt]# vim xy102.te.yaml 
apiVersion: v1
kind: LimitRange
metadata:
  name: xy102-limit
  namespace: xy102
  # target namespace
spec:
  limits:
  - default:
      # default is the hard limit (the ceiling)
      memory: 1Gi
      cpu: "4"
    defaultRequest:
      # defaultRequest is the soft limit (the request)
      memory: 1Gi
      cpu: "4"
    type: Container
    # the type the limits apply to: Container / Pod / PersistentVolumeClaim
[root@master01 opt]# kubectl apply -f xy102.te.yaml 
limitrange/xy102-limit created
[root@master01 opt]# kubectl describe namespaces xy102 
Name:         xy102
Labels:       <none>
Annotations:  <none>
Status:       Active

Resource Quotas
 Name:                   ns-xy102
 Resource                Used  Hard
 --------                ---   ---
 configmaps              1     10
 limits.cpu              0     3
 limits.memory           0     2Gi
 persistentvolumeclaims  0     4
 pods                    0     20
 requests.cpu            0     2
 requests.memory         0     1Gi
 secrets                 1     10
 services                0     10

Resource Limits
 Type       Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
 ----       --------  ---  ---  ---------------  -------------  -----------------------
 Container  cpu       -    -    4                4              -
 Container  memory    -    -    1Gi              1Gi            -
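A quick way to see the LimitRange injecting defaults (a sketch; pod name hypothetical). Note the interplay with the quota above: the default request of cpu: "4" exceeds the namespace's requests.cpu: "2" quota, so with these exact numbers a resource-less pod would still be rejected; with a smaller default (say cpu "1") the check would look like this:

    kubectl run lr-test --image=centos:7 -n xy102 -- sleep 3600
    kubectl get pod lr-test -n xy102 -o jsonpath='{.spec.containers[0].resources}'
    # expected shape: {"limits":{"cpu":"1","memory":"1Gi"},"requests":{"cpu":"1","memory":"1Gi"}}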

Homework:

A StatefulSet can also be autoscaled.

If the minimum pod count is 1 but the replica count is defined as 3, will scale-in stop at 3 or go down to 1?


[root@master01 hap]# vim hpa1.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hpa-test2
  labels:
    hpa: test2
spec:
  serviceName: "my-headless-service"
  replicas: 3
  selector:
    matchLabels:
      hpa: test2
  template:
    metadata:
      labels:
        hpa: test2
    spec:
      containers:
      - name: centos
        image: centos:7
        command: ["/bin/bash", "-c", "yum install -y epel-release --nogpgcheck && yum install -y stress --nogpgcheck && sleep 3600"]
        volumeMounts:
        - name: yum2
          mountPath: /etc/yum.repos.d/
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
      volumes:
      - name: yum2
        hostPath:
          path: /etc/yum.repos.d/
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-centos2
spec:
  scaleTargetRef:
    # matches the kind and name of the object to monitor
    apiVersion: apps/v1
    kind: StatefulSet
    name: hpa-test2
  minReplicas: 1
  # when usage is low, scale in, but keep at least this many pods
  maxReplicas: 6
  # when usage is high, scale out up to this many pods
  targetCPUUtilizationPercentage: 50
  # scaling metric and threshold: 50% CPU utilization
---
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  ports:
  - port: 80
    name: http
  clusterIP: None
  selector:
    hpa: test2
[root@master01 hap]# kubectl apply -f hpa1.yaml 
statefulset.apps/hpa-test2 created
horizontalpodautoscaler.autoscaling/hpa-centos2 configured
service/my-headless-service created
[root@master01 hap]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE    IP             NODE       NOMINATED NODE   READINESS GATES
hpa-test2-0            1/1     Running   0          105s   10.244.2.20    node02     <none>           <none>
hpa-test2-1            1/1     Running   0          104s   10.244.1.6     node01     <none>           <none>
hpa-test2-2            1/1     Running   0          102s   10.244.0.43    master01   <none>           <none>

About to exec into the pod on node02:


[root@master01 hap]# kubectl exec -it hpa-test2-0 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@hpa-test2-0 /]# stress -c 4
stress: info: [66] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd


[root@master01 hap]# kubectl get hpa
NAME          REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos2   StatefulSet/hpa-test2   33%/50%   1         6         3          11m

After waiting a while, scale-in kicks in:

[root@master01 hap]# kubectl get hpa
NAME          REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos2   StatefulSet/hpa-test2   49%/50%   1         6         2          12m
[root@master01 hap]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
hpa-test2-0            1/1     Running   0          7m41s   10.244.2.20    node02   <none>           <none>
hpa-test2-1            1/1     Running   0          7m40s   10.244.1.6     node01   <none>           <none>
nfs1-76f66b958-68wpl   1/1     Running   0          6d2h    10.244.2.173   node02   <none>           <none>

Stop the heavy CPU load:

[1]+  Stopped                 stress -c 4


[root@master01 hap]# kubectl get hpa
NAME          REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos2   StatefulSet/hpa-test2   0%/50%    1         6         2          16m
[root@master01 hap]# kubectl get hpa
NAME          REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-centos2   StatefulSet/hpa-test2   0%/50%    1         6         1          21m
[root@master01 hap]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE    IP             NODE     NOMINATED NODE   READINESS GATES
hpa-test2-0            1/1     Running   0          16m    10.244.2.20    node02   <none>           <none>
nfs1-76f66b958-68wpl   1/1     Running   0          6d3h   10.244.2.173   node02   <none>           <none>


# Homework:

A StatefulSet can be autoscaled.

If the minimum pod count is 1 but the replica count is defined as 3, will scale-in stop at 3 or go down to 1?

[root@master01 hap]# vim hpa1.yaml

[root@master01 hap]# kubectl apply -f hpa1.yaml

2. [root@master01 hap]# kubectl get pod -o wide

3. [root@master01 hap]# kubectl get hpa

4. Scale-in begins.

5. The replica count finally drops to 1. The HPA's minReplicas (1), not the StatefulSet's replicas (3), sets the scale-in floor.

Error

The homework run produced an error:

[root@master01 hap]# kubectl apply -f hpa1.yaml 
error: error validating "hpa1.yaml": error validating data: ValidationError(StatefulSet.spec): missing required field "serviceName" in io.k8s.api.apps.v1.StatefulSetSpec; if you choose to ignore these errors, turn validation off with --validate=false

Asked an AI for help:

This error message says that your StatefulSet configuration (the hpa1.yaml file) is missing the required field serviceName. In a StatefulSet spec, serviceName is a required field: it names the headless service associated with the StatefulSet's pods. That headless service is responsible for giving the StatefulSet's pods stable network identities in DNS.

To fix this, follow these steps:

  1. Check and update your YAML file.
    In hpa1.yaml, find the StatefulSet definition and make sure that spec contains a serviceName field whose value is the name of an existing headless service. For example:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: my-statefulset
    spec:
      serviceName: "my-headless-service"  # make sure this service already exists
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: mycontainer
            image: myimage
            # ... other container configuration
      # ... other StatefulSet configuration
    
  2. Create or verify the headless service.
    Make sure the headless service named my-headless-service exists. A headless service is defined like an ordinary service, except that no type field is needed and clusterIP is set to None. If you have not created it yet, create it first, like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-headless-service
    spec:
      ports:
      - port: 80
        name: http
      clusterIP: None
      selector:
        app: myapp
    
  3. Re-apply the YAML file.
    After updating the file and confirming the headless service exists, re-apply it:

    kubectl apply -f hpa1.yaml
    
  4. Check the status.
    Check the status of the StatefulSet and the headless service with:

    kubectl get statefulsets
    kubectl get svc my-headless-service
    

If you follow these steps and still run into problems, you may need to check the state of your Kubernetes cluster or look at more detailed error logs. Also note that the file name hpa1.yaml is normally used for a HorizontalPodAutoscaler (HPA) configuration, while here you appear to be configuring a StatefulSet; if that is just a naming mix-up, renaming the file to something more fitting, such as statefulset.yaml, is recommended.

2. Helm

helm:

Take nginx ingress as an example. Deploying it requires several objects:

a deployment,

an svc,

and an ingress.

Helm provides templates so that microservices can be deployed in one step:
it packages all the required yaml files together, then deploys them in one go.

Rollback is supported.

helm

With helm, k8s deployments can be configured, composed, and modified dynamically.

Three concepts:

Chart: the template for deploying and installing a k8s microservice, similar to an rpm package on Linux.

Repository: the warehouse where Charts are stored.

Release: every time a Chart is used to deploy a microservice, a release is created; think of it as a version number.



[root@master01 opt]# tar -xf helm-v3.10.0-linux-amd64.tar.gz 
[root@master01 opt]# cd linux-amd64/
[root@master01 linux-amd64]# mv helm /usr/local/bin/
## set up helm shell completion
[root@master01 linux-amd64]# vim /etc/profile
source <(helm completion bash)
[root@master01 linux-amd64]# source /etc/profile

# check the helm version
[root@master01 linux-amd64]# helm version
version.BuildInfo{Version:"v3.10.0", GitCommit:"ce66412a723e4d89555dc67217607c6579ffcb21", GitTreeState:"clean", GoVersion:"go1.18.6"}

## add repositories and templates
----------------
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo add incubator https://charts.helm.sh/incubator
-----------------------------
[root@master01 linux-amd64]# helm repo add bitnami https://charts.bitnami.com/bitnami
[root@master01 linux-amd64]# helm repo add stable http://mirror.azure.cn/kubernetes/charts
[root@master01 linux-amd64]# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
[root@master01 linux-amd64]# helm repo add incubator https://charts.helm.sh/incubator
## the repositories hold the templates (charts)
# helm repo update  # update the charts of all current repositories
[root@master01 helm]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "aliyun" chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈

View the repositories:
[root@master01 linux-amd64]# helm repo list
NAME   	URL                                                   
bitnami	https://charts.bitnami.com/bitnami                    
aliyun 	https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
stable 	http://mirror.azure.cn/kubernetes/charts       
Search the repos for nginx:
[root@master01 linux-amd64]# helm search repo aliyun | grep nginx
aliyun/nginx-ingress          	0.9.5        	0.10.2       	An nginx Ingress controller that uses ConfigMap...
aliyun/nginx-lego             	0.3.1        	             	Chart for nginx-ingress-controller and kube-lego  
[root@master01 linux-amd64]# helm search repo stable | grep nginx
stable/nginx-ingress                 	1.41.3       	v0.34.1                	DEPRECATED! An nginx Ingress controller that us...
stable/nginx-ldapauth-proxy          	0.1.6        	1.13.5                 	DEPRECATED - nginx proxy with ldapauth            
stable/nginx-lego                    	0.3.1        	                       	Chart for nginx-ingress-controller and kube-lego  

Show the details of a chart template:
[root@master01 linux-amd64]# helm show chart aliyun/nginx-lego
apiVersion: v1
deprecated: true
description: Chart for nginx-ingress-controller and kube-lego
keywords:
- kube-lego
- nginx-ingress-controller
- nginx
- letsencrypt
maintainers:
- email: jack.zampolin@gmail.com
  name: Jack Zampolin
name: nginx-lego
sources:
- https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx
- https://github.com/jetstack/kube-lego/tree/master/examples/nginx
version: 0.3.1

## remove a repository
[root@master01 linux-amd64]# helm repo remove aliyun

[root@master01 linux-amd64]# helm install redis1 stable/redis -n default
[root@master01 linux-amd64]# kubectl get pod
NAME                   READY   STATUS    RESTARTS   AGE
hpa-test2-0            1/1     Running   1          66m
hpa-test2-1            1/1     Running   0          106s
nfs1-76f66b958-68wpl   1/1     Running   0          6d3h
redis1-master-0        0/1     Pending   0          9s
redis1-slave-0         0/1     Pending   0          9s
[root@master01 linux-amd64]# kubectl describe pod redis1-master-0 
[root@master01 linux-amd64]# helm ls
NAME  	NAMESPACE	REVISION	UPDATED                               	STATUS  	CHART       	APP VERSION
redis1	default  	1       	2024-09-12 13:52:43.74847638 +0800 CST	deployed	redis-10.5.7	5.0.7  
[root@master01 linux-amd64]# helm uninstall redis1 
release "redis1" uninstalled[root@master01 linux-amd64]# kubectl get pod
NAME                   READY   STATUS    RESTARTS   AGE
hpa-test2-0            1/1     Running   1          69m
hpa-test2-1            1/1     Running   0          4m13s
nfs1-76f66b958-68wpl   1/1     Running   0          6d3h
[root@master01 linux-amd64]# cd /opt/
[root@master01 opt]# mkdir helm
[root@master01 opt]# cd helm/
[root@master01 helm]# helm create nginx1
[root@master01 helm]# yum -y install tree
[root@master01 helm]# tree nginx1
nginx1/
├── charts           # chart dependencies; usually empty and not needed
├── Chart.yaml       # chart metadata: the chart's version, name, and so on
├── templates        # the template files that deploy the application's pods to k8s
│   ├── deployment.yaml   ## controller-based workload
│   ├── _helpers.tpl
│   ├── hpa.yaml          ## HPA monitoring and autoscaling
│   ├── ingress.yaml      ## external access
│   ├── NOTES.txt
│   ├── serviceaccount.yaml  ## creates the service account
│   ├── service.yaml         ## manifest that creates the Service
│   └── tests
│       └── test-connection.yaml
└── values.yaml      # our configuration is gathered in values.yaml; once it is filled in, the values are passed into the template files under templates/ and override their defaults
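Values can also be overridden at install time without editing the file. A sketch (release name hypothetical; --set and --dry-run are standard helm flags):

    helm install nginx-demo ./nginx1 --set replicaCount=2 --dry-run
    # --set overrides values.yaml entries; --dry-run renders the manifests
    # without installing, so you can inspect what the templates expand to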

ingress-nginx

[root@master01 helm]# ls
nginx1  nginx1-0.1.0.tgz
[root@master01 helm]# cd nginx1/
[root@master01 nginx1]# ls
charts  Chart.yaml  templates  values.yaml
[root@master01 nginx1]# vim values.yaml 
replicaCount: 3  ## change the replica count

tag: "1.22"
# tag: the image version to use; if not changed, it defaults to the latest

ingress:
  enabled: true
  hosts:
    - host: www.xy102.com
      paths:
        - path: /
          pathType: Prefix

resources:
  limits:
    cpu: 100m
    memory: 128Mi

autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 6
  targetCPUUtilizationPercentage: 80

[root@master01 helm]# helm lint nginx1
==> Linting nginx1
[INFO] Chart.yaml: icon is recommended
[WARNING] templates/hpa.yaml: autoscaling/v2beta1 HorizontalPodAutoscaler is deprecated in v1.22+, unavailable in v1.25+; use autoscaling/v2 HorizontalPodAutoscaler

1 chart(s) linted, 0 chart(s) failed
[root@master01 helm]# helm package nginx1/
Successfully packaged chart and saved it to: /opt/helm/nginx1-0.1.0.tgz

# first deployment: the release starts at revision 1
[root@master01 helm]# helm install nginx1 /opt/helm/nginx1-0.1.0.tgz 
NAME: nginx1
LAST DEPLOYED: Thu Sep 12 14:09:44 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  http://www.xy102.com/

## uninstall the release (removes its pods)
[root@master01 nginx1]#  helm uninstall nginx1 
release "nginx1" uninstalled

[root@master01 helm]# kubectl edit deployments.apps nginx1 
[root@master01 helm]# kubectl get svc -o wide
NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE     SELECTOR
kubernetes            ClusterIP   10.96.0.1      <none>        443/TCP   17d     <none>
my-headless-service   ClusterIP   None           <none>        80/TCP    170m    hpa=test2
nginx1                ClusterIP   10.96.180.35   <none>        80/TCP    2m15s   app.kubernetes.io/instance=nginx1,app.kubernetes.io/name=nginx1
[root@master01 helm]# curl 10.96.180.35

## found that ingress-nginx had not been deployed yet
[root@master01 ingress]# kubectl apply -f mandatory.yaml 
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
role.rbac.authorization.k8s.io/nginx-ingress-role created
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created
[root@master01 ingress]# ls
auth  https  ingress-nginx1.yaml  mandatory.yaml  service-nodeport.yaml  traefik
[root@master01 ingress]# kubectl apply -f service-nodeport.yaml 
service/ingress-nginx created
[root@master01 ingress]# kubectl get svc
NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes            ClusterIP   10.96.0.1      <none>        443/TCP   17d
my-headless-service   ClusterIP   None           <none>        80/TCP    3h45m
nginx1                ClusterIP   10.96.167.13   <none>        80/TCP    14m
[root@master01 ingress]# kubectl get svc -n ingress-nginx 
NAME            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.96.26.61   <none>        80:31895/TCP,443:30785/TCP   9s
[root@master01 ingress]# vim /etc/hosts
192.168.168.81 master01 www.xy102.com www.test1.com
[root@master01 ingress]# curl www.test1.com
404 page not found
[root@master01 ingress]# curl www.test1.com:31895
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>

NodePort

[root@master01 nginx1]# vim values.yaml 
service:
  type: NodePort
  port: 80
  # to customize the nodePort, the template file must be changed as well
  nodePort: 30001

ingress:
  enabled: false
  className: ""
  annotations: {}

[root@master01 templates]# vim service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: {{ include "nginx1.fullname" . }}
  labels:
    {{- include "nginx1.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
      nodePort: {{ .Values.service.nodePort }}

[root@master01 helm]# helm upgrade nginx1 nginx1
Release "nginx1" has been upgraded. Happy Helming!
NAME: nginx1
LAST DEPLOYED: Thu Sep 12 15:48:21 2024
NAMESPACE: default
STATUS: deployed
REVISION: 2
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services nginx1)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
[root@master01 helm]# kubectl get svc
NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes            ClusterIP   10.96.0.1      <none>        443/TCP        17d
my-headless-service   ClusterIP   None           <none>        80/TCP         3h3m
nginx1                NodePort    10.96.180.35   <none>        80:30001/TCP   15m
[root@master01 helm]# curl 192.168.168.81:30001
[root@master01 helm]# curl www.xy102.com:30001
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>[root@master01 helm]# helm history nginx1 
REVISION	UPDATED                 	STATUS    	CHART       	APP VERSION	DESCRIPTION     
1       	Thu Sep 12 15:34:11 2024	superseded	nginx1-0.1.0	1.16.0     	Install complete
2       	Thu Sep 12 15:48:21 2024	deployed  	nginx1-0.1.0	1.16.0     	Upgrade complete

Roll nginx1 back to revision 1:
[root@master01 helm]# helm rollback nginx1 1
Rollback was a success! Happy Helming!
[root@master01 helm]# kubectl get ingress
NAME     CLASS    HOSTS           ADDRESS   PORTS   AGE
nginx1   <none>   www.xy102.com             80      58s



