k8s Cluster Setup (One Master, Two Workers, Installed with kubeadm)


Prerequisites

OS: CentOS Linux release 7.5.1804 (Core)

Memory: 2 GB

CPU: 2 cores

Host name resolution

vim /etc/hosts
192.168.109.100 master
192.168.109.101 node1
192.168.109.102 node2

Time synchronization: sync time from the network directly with the chronyd service

systemctl start chronyd
systemctl enable chronyd
date    # check that the time is now synchronized
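If you want to confirm that chronyd is actually tracking an upstream time source rather than just running, chrony ships a client for that; this optional check is not part of the original steps.

~~~bash
# Optional: confirm chronyd is synchronized with an NTP source
chronyc tracking      # "Leap status : Normal" indicates a healthy sync
chronyc sources -v    # lists the time servers currently in use
~~~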
Disable the iptables and firewalld services

# Stop and disable the firewall and iptables
systemctl stop firewalld
systemctl disable firewalld
systemctl stop iptables
systemctl disable iptables

Disable selinux
selinux is a security service in Linux; if it is not turned off, all kinds of strange problems will show up while installing the cluster.

# Edit /etc/selinux/config and change the value of SELINUX to disabled
# Note: a reboot is required after this change
vim /etc/selinux/config
SELINUX=disabled

Disable the swap partition

# Edit the partition configuration file /etc/fstab and comment out the swap line
# Note: a reboot is required after this change
UUID=455cc753-7a60-4c17-a424-7741728c44a1 /boot    xfs     defaults        0 0
/dev/mapper/centos-home /home                      xfs     defaults        0 0
# /dev/mapper/centos-swap swap                     swap    defaults        0 0

Modify the Linux kernel parameters

# Edit /etc/sysctl.d/kubernetes.conf and add the following settings
vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

# Reload the configuration (point -p at the new file; plain `sysctl -p` only reads /etc/sysctl.conf)
sysctl -p /etc/sysctl.d/kubernetes.conf
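The selinux, swap and sysctl edits above can also be scripted instead of editing the files by hand; the sketch below is an optional equivalent using standard tools, not part of the original walkthrough.

~~~bash
# Optional: apply the same changes non-interactively on each node
setenforce 0                                           # selinux -> permissive for the current boot
sed -ri 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
swapoff -a                                             # turn swap off immediately
sed -ri 's/.*swap.*/#&/' /etc/fstab                    # comment out the swap line in fstab
sysctl --system                                        # load every file under /etc/sysctl.d/
~~~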
# Load the bridge netfilter module
modprobe br_netfilter
# Check that the bridge netfilter module loaded successfully
lsmod | grep br_netfilter

Configure ipvs support

In kubernetes, a Service can be proxied in two modes: one based on iptables and one based on ipvs. Of the two, ipvs performs noticeably better, but to use it the ipvs kernel modules must be loaded manually.
# 1. Install ipset and ipvsadm
yum install ipset ipvsadm -y

# 2. Write the modules that need to be loaded into a script file
cat <<EOF >  /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# 3. Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules

# 4. Run the script
/bin/bash /etc/sysconfig/modules/ipvs.modules

# 5. Check that the modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Reboot the servers
reboot
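After the machines come back up, it is worth confirming the prerequisites really took effect before installing anything; this quick verification is an optional addition using standard commands.

~~~bash
# Optional: verify the prerequisites on every node after the reboot
getenforce                        # should print "Disabled"
free -m | grep -i swap            # the Swap line should show 0 total
systemctl is-active firewalld     # should print "inactive"
lsmod | grep br_netfilter         # if empty, run modprobe br_netfilter again (it does not persist by itself)
~~~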

Install docker

Switch to a domestic mirror for the docker-ce repo

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

List the docker versions available from the current repo

yum list docker-ce --showduplicates

Install a specific version of docker-ce
# --setopt=obsoletes=0 is required; otherwise yum will automatically install a newer version

yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7 -y

Pick any version that appears in the list above; docker-ce-18.06.3.ce-3.el7 is used here as an example.

Add a configuration file

# By default Docker uses cgroupfs as its Cgroup Driver, while kubernetes recommends systemd instead of cgroupfs

mkdir /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
}
EOF

Start docker

systemctl restart docker
systemctl enable docker

Verify the installation

docker version
docker ps -a
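Because daemon.json switches Docker to the systemd cgroup driver, it is worth checking that the setting was picked up after the restart; this optional check reads the standard docker info output.

~~~bash
# Optional: confirm Docker is using the systemd cgroup driver
docker info | grep -i "cgroup driver"    # expect: Cgroup Driver: systemd
~~~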

Install the kubernetes components

# The kubernetes package repos are hosted abroad and are slow, so switch to a domestic mirror
# Edit /etc/yum.repos.d/kubernetes.repo and add the configuration below

vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Install kubeadm, kubelet and kubectl
The version can be chosen freely, but all three hosts must run exactly the same version.

 yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y

# Configure the cgroup driver for kubelet
# Edit /etc/sysconfig/kubelet and add the configuration below

vim /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"

Enable kubelet to start on boot

 systemctl enable kubelet
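Before preparing images, an optional sanity check is to confirm that the pinned 1.17.4 packages were actually installed on all three hosts; only the standard version subcommands are used here.

~~~bash
# Optional: the versions must match on master, node1 and node2
kubeadm version -o short            # expect v1.17.4
kubelet --version                   # expect Kubernetes v1.17.4
kubectl version --client --short    # expect Client Version: v1.17.4
~~~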

Prepare the cluster images

# Before installing the kubernetes cluster, the images it needs must be prepared in advance; they can be listed with the command below

kubeadm config images list

These are the suggested image versions. Use whatever versions the command prints for you, and adjust the versions in the script below to match.

# Download the images
# These images live in the kubernetes registry, which is unreachable due to network restrictions; the script below is an alternative

images=(
    kube-apiserver:v1.17.4
    kube-controller-manager:v1.17.4
    kube-scheduler:v1.17.4
    kube-proxy:v1.17.4
    pause:3.1
    etcd:3.4.3-0
    coredns:1.6.5
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi  registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

If a download stalls, simply re-run the for loop.
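To confirm that all seven images are now present locally under the k8s.gcr.io names that kubeadm expects, an optional listing like the following can be used.

~~~bash
# Optional: every image from `kubeadm config images list` should appear here
docker images | grep k8s.gcr.io
~~~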

Cluster initialization

Next, initialize the cluster and join the node machines to it.

> The following steps only need to be run on the `master` node

kubeadm init \
    --kubernetes-version=v1.17.4 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/12 \
    --apiserver-advertise-address=192.168.109.100    # change the IP to your own master IP

# Create the necessary files
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

The commands above are printed automatically in the output once kubeadm init completes.

> The following step only needs to be run on the `node` machines
# Join the node machines to the cluster
The command below is also printed automatically when kubeadm init completes.

kubeadm join 192.168.109.100:6443 --token 8507uc.o0knircuri8etnw2 \
    --discovery-token-ca-cert-hash sha256:acc37967fb5b0acf39d7598f8a439cc7dc88f439a3f4d0c9cae88e7901b9d3f
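The bootstrap token in the join command expires after 24 hours by default; if you add a node later or lose the output, a fresh join command can be printed on the master with kubeadm's standard subcommand.

~~~bash
# Optional (run on master): generate a new, valid join command
kubeadm token create --print-join-command
~~~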

Check the cluster status

At this point the nodes show NotReady, because no network plugin has been installed yet.
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   6m43s   v1.17.4
node1    NotReady   <none>   22s     v1.17.4
node2    NotReady   <none>   19s     v1.17.4

Install a network plugin

kubernetes supports several network plugins, such as flannel, calico and canal; any one of them will do. flannel is used here.

 wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Replace the quay.io registry in the file with quay-mirror.qiniu.com (a sketch of doing this with sed follows)
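If you would rather not edit the manifest by hand, the registry swap can be done with a single sed substitution; this is an optional sketch, not part of the original steps.

~~~bash
# Optional: rewrite every quay.io reference in the downloaded manifest
sed -i 's#quay.io#quay-mirror.qiniu.com#g' kube-flannel.yml
grep 'image:' kube-flannel.yml    # verify the images now point at quay-mirror.qiniu.com
~~~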

# Deploy flannel using the manifest


kubectl apply -f kube-flannel.yml
# Wait a moment, then check the status of the cluster nodes again
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   15m     v1.17.4
node1    Ready    <none>   8m53s   v1.17.4
node2    Ready    <none>   8m50s   v1.17.4
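If the nodes stay NotReady for long, the flannel pods themselves are the first thing to inspect; this optional check relies on the app=flannel label that the manifest below applies to its DaemonSets.

~~~bash
# Optional: watch the flannel pods come up in kube-system
kubectl get pods -n kube-system -l app=flannel -o wide
~~~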

A corrected kube-flannel.yml is provided here; copy it directly if needed.

vim kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
The original manifest continues with four more DaemonSets: kube-flannel-ds-arm64, kube-flannel-ds-arm, kube-flannel-ds-ppc64le and kube-flannel-ds-s390x. Each is identical to kube-flannel-ds-amd64 above except for the beta.kubernetes.io/arch value in the node affinity and the image tag (quay-mirror.qiniu.com/coreos/flannel:v0.12.0-arm64, -arm, -ppc64le and -s390x respectively). Only the amd64 DaemonSet is needed for the x86_64 CentOS hosts used in this guide; add the other variants back if your cluster mixes architectures.

Service deployment

Finally, deploy an nginx application in the kubernetes cluster to check that the cluster is working properly.

# Deploy nginx
kubectl create deployment nginx --image=nginx:1.14-alpine

# Expose the port
kubectl expose deployment nginx --port=80 --type=NodePort

# Check the service status
[root@master ~]# kubectl get pods,service
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-86c57db685-fdc2k   1/1     Running   0          18m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        82m
service/nginx        NodePort    10.104.121.45   <none>        80:30073/TCP   17m
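Since nginx is exposed as a NodePort (30073 in the output above), it can be reached from outside the cluster on any node's IP; this optional check assumes the port number shown in your own `kubectl get service` output.

~~~bash
# Optional: fetch the nginx welcome page through the NodePort (substitute your own node IP and port)
curl http://192.168.109.100:30073
~~~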

Success.

