Deploying NeuVector 5.X, a Cloud-Native Security Solution

2024-11-13

Deploying NeuVector 5.X, a Cloud-Native Security Solution

NeuVector 5.X can be deployed on Docker, Rancher, OpenShift, and Kubernetes clusters on the major public clouds, and it supports YAML, Operator, and Helm installation methods. This article walks through deploying NeuVector 5.X on a local Kubernetes cluster.

1. Deployment Options Overview

YAML-based deployment

Use Kubernetes to deploy separate manager, controller, and enforcer containers, and make sure an enforcer is deployed on every new node. NeuVector requires and supports a Kubernetes network plugin such as flannel, weave, or calico.

The sample file deploys one manager and three controllers. It deploys an enforcer on every node as a DaemonSet. By default, the sample below also deploys to the master node.

See the section at the bottom for how to use node labels to designate dedicated manager or controller nodes.

Note:

Deploying (scaling) multiple managers behind a load balancer is not recommended because of potential session-state issues. If you plan to use a PersistentVolume claim to store backups of the NeuVector configuration files, see the "Backups/Persistent Data" section in "Deploying NeuVector Overview".

If the deployment needs a load balancer, change the console's service type in the yaml file below from NodePort to LoadBalancer.

NeuVector images on Docker Hub

The images are stored in the NeuVector Docker Hub registry. Use the appropriate version tag for the manager, controller, and enforcer, and leave the scanner and updater at "latest". For example:

  • neuvector/manager:5.4.0
  • neuvector/controller:5.4.0
  • neuvector/enforcer:5.4.0
  • neuvector/scanner:latest
  • neuvector/updater:latest

Be sure to update the image references in the corresponding YAML files.
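If the images are mirrored into a private registry, the image references in the manifest have to be rewritten to match. A minimal sketch, assuming a hypothetical registry host `harbor.example.com` and a demo manifest at `/tmp/nv-demo.yaml` (both are placeholders, not from the original):

```shell
# Create a one-line sample manifest for demonstration (hypothetical path).
printf 'image: neuvector/controller:5.4.0\n' > /tmp/nv-demo.yaml

# Prefix every neuvector/* image reference with the private registry host.
sed -E -i 's#image: neuvector/#image: harbor.example.com/neuvector/#' /tmp/nv-demo.yaml

cat /tmp/nv-demo.yaml   # image: harbor.example.com/neuvector/controller:5.4.0
```

The same substitution applied to the full neuvector.yaml would rewrite the manager, controller, and enforcer references in one pass.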

You can download the NeuVector images from Docker Hub and push them to a private registry, or pull them directly from the docker.io registry:

[root@harbor neuvector]# cat images.txt
neuvector/manager:5.4.0
neuvector/controller:5.4.0
neuvector/enforcer:5.4.0
neuvector/scanner:latest
neuvector/updater:latest
[root@harbor neuvector]# cat pull_push.sh
while IFS= read -r image; do
  docker pull "$image"
  new_tag="harbor.jdzx.com/${image}"  # replace with your private registry address
  docker tag "$image" "$new_tag"
  echo "$new_tag"
  docker push "$new_tag"
done < images.txt

Helm deployment

NeuVector supports Helm-based deployment; the Helm chart is at https://github.com/neuvector/neuvector-helm. If you use the current NeuVector Helm chart (v1.8.9 and above), make the following changes to the values.yml file:

  • Update the registry to docker.io
  • Update the image names/tags to the appropriate versions on Docker Hub, as shown above
  • Leave imagePullSecrets empty
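The three changes above map onto a values.yml fragment roughly like the following sketch. The key layout is an assumption based on the neuvector-helm core chart; verify the exact key names against the chart version you actually use:

```yaml
registry: docker.io          # pull from Docker Hub
controller:
  image:
    repository: neuvector/controller
    tag: 5.4.0
enforcer:
  image:
    repository: neuvector/enforcer
    tag: 5.4.0
manager:
  image:
    repository: neuvector/manager
    tag: 5.4.0
imagePullSecrets: ""         # left empty, per the list above
```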

Rancher deployment

The NeuVector images are also mirrored into the Rancher registry for deployment from Rancher. See the Rancher deployment section for more information. After each release, allow a few days for the images to be mirrored into the Rancher registry.

Note:

When the NeuVector chart is deployed from Rancher Manager 2.6.5 and above, it pulls from the rancher-mirrored repository and deploys into the cattle-neuvector-system namespace.

2. Deploying NeuVector on Kubernetes

This section documents a YAML-based NeuVector deployment on Kubernetes, using Kubernetes v1.27.6.

Deploy NeuVector

  1. Create the NeuVector namespace and the required service accounts
kubectl create namespace neuvector
kubectl create sa controller -n neuvector
kubectl create sa enforcer -n neuvector
kubectl create sa basic -n neuvector
kubectl create sa updater -n neuvector
kubectl create sa scanner -n neuvector
kubectl create sa registry-adapter -n neuvector
kubectl create sa cert-upgrader -n neuvector
  2. (Optional) Create the NeuVector Pod Security Admission (PSA) or Pod Security Policy (PSP) configuration.

If you have enabled Pod Security Admission (PSA, i.e. Pod Security Standards) in Kubernetes 1.25+, or Pod Security Policies (PSP) in a cluster before 1.25, add the following for NeuVector (e.g., nv_psp.yaml).

**Note 1:** PSP was deprecated in Kubernetes 1.21 and is completely removed in 1.25.

**Note 2:** The Manager and Scanner pods run without a user ID. If your PSP rules include Run As User: Rule: MustRunAsNonRoot, add the following to the sample yaml below (replacing ### with a suitable value):

securityContext:
  runAsUser: ###

For PSA in Kubernetes 1.25+, label the NeuVector namespace with the privileged profile when deploying on a PSA-enabled cluster:

kubectl label namespace neuvector "pod-security.kubernetes.io/enforce=privileged"
  3. Create the custom resource definitions (CRDs) for the NeuVector security rules. For Kubernetes 1.19+:
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/crd-k8s-1.19.yaml
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/waf-crd-k8s-1.19.yaml
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/dlp-crd-k8s-1.19.yaml
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/com-crd-k8s-1.19.yaml
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/vul-crd-k8s-1.19.yaml
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/admission-crd-k8s-1.19.yaml
  4. Add read permissions for access to the Kubernetes API.

Important

Standard NeuVector 5.2+ deployments use least-privileged service accounts instead of the default service account. If you are upgrading from a version before 5.3, see below.

Note

If you are upgrading to 5.3.0 or above, run the following commands according to your current version:

# For version 5.2.0
kubectl delete clusterrole neuvector-binding-nvsecurityrules neuvector-binding-nvadmissioncontrolsecurityrules neuvector-binding-nvdlpsecurityrules neuvector-binding-nvwafsecurityrules
# For versions before 5.2.0
kubectl delete clusterrolebinding neuvector-binding-app neuvector-binding-rbac neuvector-binding-admission neuvector-binding-customresourcedefinition neuvector-binding-nvsecurityrules neuvector-binding-view neuvector-binding-nvwafsecurityrules neuvector-binding-nvadmissioncontrolsecurityrules neuvector-binding-nvdlpsecurityrules
kubectl delete rolebinding neuvector-admin -n neuvector

Apply the read permissions with the "create clusterrole" commands below:

kubectl create clusterrole neuvector-binding-app --verb=get,list,watch,update --resource=nodes,pods,services,namespaces
kubectl create clusterrole neuvector-binding-rbac --verb=get,list,watch --resource=rolebindings.rbac.authorization.k8s.io,roles.rbac.authorization.k8s.io,clusterrolebindings.rbac.authorization.k8s.io,clusterroles.rbac.authorization.k8s.io
kubectl create clusterrolebinding neuvector-binding-app --clusterrole=neuvector-binding-app --serviceaccount=neuvector:controller
kubectl create clusterrolebinding neuvector-binding-rbac --clusterrole=neuvector-binding-rbac --serviceaccount=neuvector:controller
kubectl create clusterrole neuvector-binding-admission --verb=get,list,watch,create,update,delete --resource=validatingwebhookconfigurations,mutatingwebhookconfigurations
kubectl create clusterrolebinding neuvector-binding-admission --clusterrole=neuvector-binding-admission --serviceaccount=neuvector:controller
kubectl create clusterrole neuvector-binding-customresourcedefinition --verb=watch,create,get,update --resource=customresourcedefinitions
kubectl create clusterrolebinding neuvector-binding-customresourcedefinition --clusterrole=neuvector-binding-customresourcedefinition --serviceaccount=neuvector:controller
kubectl create clusterrole neuvector-binding-nvsecurityrules --verb=get,list,delete --resource=nvsecurityrules,nvclustersecurityrules
kubectl create clusterrole neuvector-binding-nvadmissioncontrolsecurityrules --verb=get,list,delete --resource=nvadmissioncontrolsecurityrules
kubectl create clusterrole neuvector-binding-nvdlpsecurityrules --verb=get,list,delete --resource=nvdlpsecurityrules
kubectl create clusterrole neuvector-binding-nvwafsecurityrules --verb=get,list,delete --resource=nvwafsecurityrules
kubectl create clusterrolebinding neuvector-binding-nvsecurityrules --clusterrole=neuvector-binding-nvsecurityrules --serviceaccount=neuvector:controller
kubectl create clusterrolebinding neuvector-binding-view --clusterrole=view --serviceaccount=neuvector:controller
kubectl create clusterrolebinding neuvector-binding-nvwafsecurityrules --clusterrole=neuvector-binding-nvwafsecurityrules --serviceaccount=neuvector:controller
kubectl create clusterrolebinding neuvector-binding-nvadmissioncontrolsecurityrules --clusterrole=neuvector-binding-nvadmissioncontrolsecurityrules --serviceaccount=neuvector:controller
kubectl create clusterrolebinding neuvector-binding-nvdlpsecurityrules --clusterrole=neuvector-binding-nvdlpsecurityrules --serviceaccount=neuvector:controller
kubectl create role neuvector-binding-scanner --verb=get,patch,update,watch --resource=deployments -n neuvector
kubectl create rolebinding neuvector-binding-scanner --role=neuvector-binding-scanner --serviceaccount=neuvector:updater --serviceaccount=neuvector:controller -n neuvector
kubectl create role neuvector-binding-secret --verb=get,list,watch --resource=secrets -n neuvector
kubectl create rolebinding neuvector-binding-secret --role=neuvector-binding-secret --serviceaccount=neuvector:controller --serviceaccount=neuvector:enforcer --serviceaccount=neuvector:scanner --serviceaccount=neuvector:registry-adapter -n neuvector
kubectl create clusterrole neuvector-binding-nvcomplianceprofiles --verb=get,list,delete --resource=nvcomplianceprofiles
kubectl create clusterrolebinding neuvector-binding-nvcomplianceprofiles --clusterrole=neuvector-binding-nvcomplianceprofiles --serviceaccount=neuvector:controller
kubectl create clusterrole neuvector-binding-nvvulnerabilityprofiles --verb=get,list,delete --resource=nvvulnerabilityprofiles
kubectl create clusterrolebinding neuvector-binding-nvvulnerabilityprofiles --clusterrole=neuvector-binding-nvvulnerabilityprofiles --serviceaccount=neuvector:controller
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/neuvector-roles-k8s.yaml
kubectl create role neuvector-binding-lease --verb=create,get,update --resource=leases -n neuvector
kubectl create rolebinding neuvector-binding-cert-upgrader --role=neuvector-binding-cert-upgrader --serviceaccount=neuvector:cert-upgrader -n neuvector
kubectl create rolebinding neuvector-binding-job-creation --role=neuvector-binding-job-creation --serviceaccount=neuvector:controller -n neuvector
kubectl create rolebinding neuvector-binding-lease --role=neuvector-binding-lease --serviceaccount=neuvector:controller --serviceaccount=neuvector:cert-upgrader -n neuvector
  5. Run the following command to check that the neuvector/controller and neuvector/updater service accounts were added successfully.
kubectl get ClusterRoleBinding neuvector-binding-app neuvector-binding-rbac neuvector-binding-admission neuvector-binding-customresourcedefinition neuvector-binding-nvsecurityrules neuvector-binding-view neuvector-binding-nvwafsecurityrules neuvector-binding-nvadmissioncontrolsecurityrules neuvector-binding-nvdlpsecurityrules -o wide

Sample output:

NAME                                                ROLE                                                            AGE   USERS   GROUPS   SERVICEACCOUNTS
neuvector-binding-app                               ClusterRole/neuvector-binding-app                               56d                    neuvector/controller
neuvector-binding-rbac                              ClusterRole/neuvector-binding-rbac                              34d                    neuvector/controller
neuvector-binding-admission                         ClusterRole/neuvector-binding-admission                         72d                    neuvector/controller
neuvector-binding-customresourcedefinition          ClusterRole/neuvector-binding-customresourcedefinition          72d                    neuvector/controller
neuvector-binding-nvsecurityrules                   ClusterRole/neuvector-binding-nvsecurityrules                   72d                    neuvector/controller
neuvector-binding-view                              ClusterRole/view                                                72d                    neuvector/controller
neuvector-binding-nvwafsecurityrules                ClusterRole/neuvector-binding-nvwafsecurityrules                72d                    neuvector/controller
neuvector-binding-nvadmissioncontrolsecurityrules   ClusterRole/neuvector-binding-nvadmissioncontrolsecurityrules   72d                    neuvector/controller
neuvector-binding-nvdlpsecurityrules                ClusterRole/neuvector-binding-nvdlpsecurityrules                72d                    neuvector/controller

Also run the following command:

kubectl get RoleBinding neuvector-binding-scanner neuvector-binding-cert-upgrader neuvector-binding-job-creation neuvector-binding-lease neuvector-binding-secret -n neuvector -o wide

Sample output:

NAME                              ROLE                                   AGE    USERS   GROUPS   SERVICEACCOUNTS
neuvector-binding-scanner         Role/neuvector-binding-scanner         8m8s                    neuvector/controller, neuvector/updater
neuvector-binding-cert-upgrader   Role/neuvector-binding-cert-upgrader   8m8s                    neuvector/cert-upgrader
neuvector-binding-job-creation    Role/neuvector-binding-job-creation    8m8s                    neuvector/controller
neuvector-binding-lease           Role/neuvector-binding-lease           8m8s                    neuvector/controller, neuvector/cert-upgrader
neuvector-binding-secret          Role/neuvector-binding-secret          8m8s                    neuvector/controller, neuvector/enforcer, neuvector/scanner, neuvector/registry-adapter
  6. (Optional) Create the federation master and/or remote multi-cluster management services. If you plan to use NeuVector's multi-cluster management functions, one cluster must deploy the federation master service, and each remote cluster must have a federation worker service. For flexibility, you can choose to deploy both the master and worker services on every cluster, so that any cluster can become the master or a remote.

Federation service YAML:

apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-controller-fed-master
  namespace: neuvector
spec:
  ports:
  - port: 11443
    name: fed
    protocol: TCP
  type: LoadBalancer
  selector:
    app: neuvector-controller-pod
---
apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-controller-fed-worker
  namespace: neuvector
spec:
  ports:
  - port: 10443
    name: fed
    protocol: TCP
  type: LoadBalancer
  selector:
    app: neuvector-controller-pod

Create the services:

kubectl create -f nv_master_worker.yaml
  7. Create the main NeuVector services and pods with the preset-version command, or modify the sample yaml file below. The preset version invokes a LoadBalancer for the NeuVector console. If you use the sample yaml below, replace the image names and <version> tags for the manager, controller, and enforcer image references in the yaml file. Also make any other modifications required by your deployment environment (e.g., LoadBalancer/NodePort/Ingress for manager access, etc.).
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/neuvector-k8s.yaml

Or modify the sample yaml file as needed:

# Change the webui service type to NodePort
...
---
apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-webui
  namespace: neuvector
spec:
  ports:
  - port: 8443
    name: manager
    protocol: TCP
#  type: LoadBalancer
  type: NodePort
  selector:
    app: neuvector-manager-pod
---
...

# Apply the YAML file
kubectl create -f neuvector.yaml

Confirm that the related pods are running; sample output:

# pod
[root@k8s-master1 neuvector]# kubectl get pod -n neuvector
NAME                                       READY   STATUS      RESTARTS   AGE
neuvector-controller-pod-9558f8954-28478   1/1     Running     0          10h
neuvector-controller-pod-9558f8954-28jnm   1/1     Running     0          10h
neuvector-controller-pod-9558f8954-lw8cm   1/1     Running     0          10h
neuvector-enforcer-pod-jrfwf               1/1     Running     0          10h
neuvector-enforcer-pod-mvhp9               1/1     Running     0          10h
neuvector-enforcer-pod-rhp7t               1/1     Running     0          10h
neuvector-enforcer-pod-wcnnn               1/1     Running     0          10h
neuvector-manager-pod-7c44bfbd4c-w4pnl     1/1     Running     0          9h
neuvector-scanner-pod-7cfc95b64f-74dr6     1/1     Running     0          72m
neuvector-scanner-pod-7cfc95b64f-76psj     1/1     Running     0          73m
neuvector-updater-pod-28851840-kmk66       0/1     Completed   0          73m
# svc
[root@k8s-master1 neuvector]# kubectl get svc -n neuvector
NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
neuvector-service-webui           NodePort    10.97.188.186   <none>        8443:31585/TCP                  10h
neuvector-svc-admission-webhook   ClusterIP   10.109.45.10    <none>        443/TCP                         10h
neuvector-svc-controller          ClusterIP   None            <none>        18300/TCP,18301/TCP,18301/UDP   10h
neuvector-svc-crd-webhook         ClusterIP   10.106.99.188   <none>        443/TCP                         10h

Access the NeuVector console at https://<public-ip>:8443; the default username and password are admin/admin.

Note

The NodePort service specified in the neuvector.yaml file opens a random port on all Kubernetes nodes for the NeuVector management web console. Alternatively, you can use a LoadBalancer or Ingress with a public IP and the default port 8443. For NodePort, make sure access to that port is opened through the firewall if needed.

To see which port was opened on the host nodes, run:

kubectl get svc -n neuvector

The output will look similar to:

NAME                          CLUSTER-IP      EXTERNAL-IP   PORT(S)                                          AGE
neuvector-service-webui     10.100.195.99     <nodes>       8443:31585/TCP

For a NodePort-type service, access the console at https://<public-ip>:<nodeport>:

(Dashboard screenshot)
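As a quick sanity check on that address, the NodePort can be cut out of the PORT(S) column shown by `kubectl get svc` (the sample value below is taken from the service listing above; on a live cluster you could instead query `.spec.ports[0].nodePort` with `-o jsonpath`):

```shell
# Sample PORT(S) value from the service listing above.
svc_ports="8443:31585/TCP"

# The NodePort is the part between ':' and '/'.
nodeport=${svc_ports#*:}     # strip the service port -> "31585/TCP"
nodeport=${nodeport%%/*}     # strip the protocol     -> "31585"

echo "https://<node-ip>:${nodeport}"   # prints https://<node-ip>:31585
```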

PKS changes (optional)

Note

PKS [VMware Enterprise PKS (Pivotal Container Service)] has been field-tested; it requires enabling privileged containers in the plan/tile, along with the following hostPath change in the Allinone, Controller, and Enforcer yaml files:

hostPath:
  path: /var/vcap/sys/run/docker/docker.sock

Master node taints and tolerations (optional)

For the Enforcer to be scheduled on a node, all taints must match. Check a node's taints (for example, the master node) with:

kubectl get node <tainted-node-name> -o yaml

Sample output:

spec:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  # There may be additional taints, e.g.:
  - effect: NoSchedule
    key: mykey
    value: myvalue

If there are additional taints as shown above, add them to the tolerations section of the yaml file:

spec:
  template:
    spec:
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
      # If there are additional taints as above, add them here. All taints defined
      # on a tainted node must match, or the Enforcer cannot be deployed to it.
      - effect: NoSchedule
        key: mykey
        value: myvalue

Using node labels to designate Manager and Controller nodes (optional)

To control which nodes the Manager and Controller are deployed on, label each node. Replace nodename with the appropriate node name (see kubectl get nodes).

Note: by default, Kubernetes does not schedule pods on the master node.

kubectl label nodes nodename nvcontroller=true

Then add a nodeSelector to the Manager and Controller deployment yaml files, for example:

      - mountPath: /host/cgroup
        name: cgroup-vol
        readOnly: true
      nodeSelector:
        nvcontroller: "true"
      restartPolicy: Always

If the Controller node is a dedicated management node (with no application containers to monitor) and you want to keep Enforcers off it, add a nodeAffinity to the Enforcer section of the yaml file. For example:

  app: neuvector-enforcer-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: nvcontroller
            operator: NotIn
            values: ["true"]
  imagePullSecrets:

Rolling updates (optional)

Orchestration tools such as Kubernetes, RedHat OpenShift, and Rancher support rolling updates with configurable policies, and this can be used to update the NeuVector containers. The most important point is to keep at least one Controller (or Allinone) running at all times so that policy, logging, and connection data is not lost. Make sure the update interval between containers is at least 120 seconds, so that a new lead can be elected and data can synchronize between controllers.

The provided sample deployment yaml files are already configured with a rolling-update policy. If you update through the NeuVector Helm chart, pull the latest chart so that new features (such as admission control) are configured correctly, and delete the old NeuVector cluster roles and cluster role bindings. If you update through Kubernetes, you can update to a new version manually with the sample commands below.
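The constraints just described correspond to the strategy block in the sample Controller deployment later in this article: at most one extra pod during the update, no pod ever unavailable, and a 60-second readiness window per pod:

```yaml
spec:
  minReadySeconds: 60
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
```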

Kubernetes rolling-update examples

For an upgrade that only moves to a new image version, you can use the following simple methods.

If your Deployment or DaemonSet is already running, you can change the yaml file to the new version and then apply the update:

kubectl apply -f <yaml file>

To update to a new NeuVector version from the command line:

  1. For the Controller running as a Deployment (and similarly for the Manager):
kubectl set image deployment/neuvector-controller-pod neuvector-controller-pod=neuvector/controller:<version> -n neuvector
  2. For any container running as a DaemonSet:
kubectl set image -n neuvector ds/neuvector-enforcer-pod neuvector-enforcer-pod=neuvector/enforcer:<version>

Check the status of the rolling update:

kubectl rollout status -n neuvector ds/neuvector-enforcer-pod
kubectl rollout status -n neuvector deployment/neuvector-controller-pod

Roll back the update:

kubectl rollout undo -n neuvector ds/neuvector-enforcer-pod
kubectl rollout undo -n neuvector deployment/neuvector-controller-pod

Exposing the REST API in Kubernetes (optional)

To access the REST API from outside the Kubernetes cluster, here is a sample yaml file:

apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-rest
  namespace: neuvector
spec:
  ports:
  - port: 10443
    name: controller
    protocol: TCP
  type: LoadBalancer
  selector:
    app: neuvector-controller-pod

For more information on the REST API, see the Automation section.

Deploying in non-privileged mode (optional)

The following instructions can be used to deploy NeuVector without privileged-mode containers. The Controller already runs unprivileged; the Enforcer deployment requires the changes shown in the snippet below.

Enforcer:

spec:
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/neuvector-enforcer-pod: unconfined
        # this line is required to be added if k8s version is pre-v1.19
        # container.seccomp.security.alpha.kubernetes.io/neuvector-enforcer-pod: unconfined
    spec:
      containers:
      - securityContext:
          # the following two lines are required for k8s v1.19+. pls comment out both lines if version is pre-1.19. Otherwise, a validating data error message will show
          seccompProfile:
            type: Unconfined
          capabilities:
            add:
            - SYS_ADMIN
            - NET_ADMIN
            - SYS_PTRACE
            - IPC_LOCK

The following sample is a complete deployment reference (Kubernetes 1.19+).

apiVersion: v1
kind: Service
metadata:
  name: neuvector-svc-crd-webhook
  namespace: neuvector
spec:
  ports:
  - port: 443
    targetPort: 30443
    protocol: TCP
    name: crd-webhook
  type: ClusterIP
  selector:
    app: neuvector-controller-pod
---
apiVersion: v1
kind: Service
metadata:
  name: neuvector-svc-admission-webhook
  namespace: neuvector
spec:
  ports:
  - port: 443
    targetPort: 20443
    protocol: TCP
    name: admission-webhook
  type: ClusterIP
  selector:
    app: neuvector-controller-pod
---
apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-webui
  namespace: neuvector
spec:
  ports:
  - port: 8443
    name: manager
    protocol: TCP
  type: LoadBalancer
  #type: NodePort
  selector:
    app: neuvector-manager-pod
---
apiVersion: v1
kind: Service
metadata:
  name: neuvector-svc-controller
  namespace: neuvector
spec:
  ports:
  - port: 18300
    protocol: "TCP"
    name: "cluster-tcp-18300"
  - port: 18301
    protocol: "TCP"
    name: "cluster-tcp-18301"
  - port: 18301
    protocol: "UDP"
    name: "cluster-udp-18301"
  clusterIP: None
  selector:
    app: neuvector-controller-pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neuvector-manager-pod
  namespace: neuvector
spec:
  selector:
    matchLabels:
      app: neuvector-manager-pod
  replicas: 1
  template:
    metadata:
      labels:
        app: neuvector-manager-pod
    spec:
      serviceAccountName: basic
      serviceAccount: basic
      containers:
      - name: neuvector-manager-pod
        image: neuvector/manager:5.4.0
        env:
        - name: CTRL_SERVER_IP
          value: neuvector-svc-controller.neuvector
      restartPolicy: Always
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neuvector-controller-pod
  namespace: neuvector
spec:
  selector:
    matchLabels:
      app: neuvector-controller-pod
  minReadySeconds: 60
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  replicas: 3
  template:
    metadata:
      labels:
        app: neuvector-controller-pod
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - neuvector-controller-pod
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: controller
      serviceAccount: controller
      containers:
      - name: neuvector-controller-pod
        image: neuvector/controller:5.4.0
        securityContext:
          runAsUser: 0
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/ready
          initialDelaySeconds: 5
          periodSeconds: 5
        env:
        - name: CLUSTER_JOIN_ADDR
          value: neuvector-svc-controller.neuvector
        - name: CLUSTER_ADVERTISED_ADDR
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: CLUSTER_BIND_ADDR
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        # - name: CTRL_PERSIST_CONFIG
        #   value: "1"
        volumeMounts:
        # - mountPath: /var/neuvector
        #   name: nv-share
        #   readOnly: false
        - mountPath: /etc/config
          name: config-volume
          readOnly: true
      terminationGracePeriodSeconds: 300
      restartPolicy: Always
      volumes:
      # - name: nv-share
      #   persistentVolumeClaim:
      #     claimName: neuvector-data
      - name: config-volume
        projected:
          sources:
          - configMap:
              name: neuvector-init
              optional: true
          - secret:
              name: neuvector-init
              optional: true
          - secret:
              name: neuvector-secret
              optional: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: neuvector-enforcer-pod
  namespace: neuvector
spec:
  selector:
    matchLabels:
      app: neuvector-enforcer-pod
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: neuvector-enforcer-pod
      annotations:
        container.apparmor.security.beta.kubernetes.io/neuvector-enforcer-pod: unconfined
        # Add the following for pre-v1.19
        # container.seccomp.security.alpha.kubernetes.io/neuvector-enforcer-pod: unconfined
    spec:
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
      hostPID: true
      serviceAccountName: enforcer
      serviceAccount: enforcer
      containers:
      - name: neuvector-enforcer-pod
        image: neuvector/enforcer:5.4.0
        securityContext:
          # the following two lines are required for k8s v1.19+. pls comment out both lines if version is pre-1.19. Otherwise, a validating data error message will show
          seccompProfile:
            type: Unconfined
          capabilities:
            add:
            - SYS_ADMIN
            - NET_ADMIN
            - SYS_PTRACE
            - IPC_LOCK
        env:
        - name: CLUSTER_JOIN_ADDR
          value: neuvector-svc-controller.neuvector
        - name: CLUSTER_ADVERTISED_ADDR
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: CLUSTER_BIND_ADDR
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - mountPath: /lib/modules
          name: modules-vol
          readOnly: true
        # - mountPath: /run/runtime.sock
        #   name: runtime-sock
        #   readOnly: true
        # - mountPath: /host/proc
        #   name: proc-vol
        #   readOnly: true
        # - mountPath: /host/cgroup
        #   name: cgroup-vol
        #   readOnly: true
        - mountPath: /var/nv_debug
          name: nv-debug
          readOnly: false
      terminationGracePeriodSeconds: 1200
      restartPolicy: Always
      volumes:
      - name: modules-vol
        hostPath:
          path: /lib/modules
      # - name: runtime-sock
      #   hostPath:
      #     path: /var/run/docker.sock
      #     path: /var/run/containerd/containerd.sock
      #     path: /run/dockershim.sock
      #     path: /run/k3s/containerd/containerd.sock
      #     path: /var/run/crio/crio.sock
      #     path: /var/vcap/sys/run/docker/docker.sock
      # - name: proc-vol
      #   hostPath:
      #     path: /proc
      # - name: cgroup-vol
      #   hostPath:
      #     path: /sys/fs/cgroup
      - name: nv-debug
        hostPath:
          path: /var/nv_debug
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neuvector-scanner-pod
  namespace: neuvector
spec:
  selector:
    matchLabels:
      app: neuvector-scanner-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  replicas: 2
  template:
    metadata:
      labels:
        app: neuvector-scanner-pod
    spec:
      serviceAccountName: scanner
      serviceAccount: scanner
      containers:
      - name: neuvector-scanner-pod
        image: neuvector/scanner:latest
        imagePullPolicy: Always
        env:
        - name: CLUSTER_JOIN_ADDR
          value: neuvector-svc-controller.neuvector
      restartPolicy: Always
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: neuvector-updater-pod
  namespace: neuvector
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: neuvector-updater-pod
        spec:
          serviceAccountName: updater
          serviceAccount: updater
          containers:
          - name: neuvector-updater-pod
            image: neuvector/updater:latest
            imagePullPolicy: Always
            command:
            - TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`; /usr/bin/curl -kv -X PATCH -H "Authorization:Bearer $TOKEN" -H "Content-Type:application/strategic-merge-patch+json" -d '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'`date +%Y-%m-%dT%H:%M:%S%z`'"}}}}}' 'https://kubernetes.default/apis/apps/v1/namespaces/neuvector/deployments/neuvector-scanner-pod'
          restartPolicy: Never

PKS changes

Note

PKS has been field-tested; it requires enabling privileged containers in the plan/tile, along with the following hostPath change in the yaml files for the Allinone and Enforcer:

      hostPath:
        path: /var/vcap/sys/run/docker/docker.sock

