Setting up a single-node k3s environment (Phytium + Kylin)
- Introduction to k3s
- Environment information
- Deploying k3s
  - Running the k3s install script
  - Configuring a registry mirror
- Installing kubernetes-dashboard
  - Deploying kubernetes-dashboard
  - Configuring RBAC
  - Accessing kubernetes-dashboard
- Installing the monitoring stack
  - Installing kube-prometheus
  - Configuring data persistence
  - Accessing grafana
Introduction to k3s
k3s is a lightweight Kubernetes distribution from Rancher. Compared with a full k8s installation, it is simpler to install and consumes fewer resources.
Environment information
CPU: Phytium FT-2000+/64, 64 cores
Memory: 256 GB
OS: Kylin V10 Server SP1
Deploying k3s
Running the k3s install script
K3s provides an install script that makes it easy to install it as a service on systemd- or openrc-based systems. The script is available at https://get.k3s.io. To install K3s this way, run:
systemctl stop firewalld       # stop the firewall
systemctl disable firewalld
swapoff -a                     # temporarily disable swap
curl -sfL https://rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
On Kylin V10 the script fails at the k3s SELinux configuration step; setting INSTALL_K3S_SELINUX_WARN=true downgrades that error to a warning so the installation can continue.
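A minimal sketch of the rerun with that variable set, assuming the mirror-based install above is still what you want:
curl -sfL https://rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/k3s-install.sh | \
  INSTALL_K3S_MIRROR=cn INSTALL_K3S_SELINUX_WARN=true sh -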
After the script finishes, check the cluster status:
kubectl cluster-info
kubectl get node
[root@localhost ~]# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
[root@localhost ~]# kubectl get node
NAME                    STATUS   ROLES                  AGE   VERSION
localhost.localdomain   Ready    control-plane,master   25h   v1.24.4+k3s1
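k3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml and ships an embedded kubectl. If you use a separately installed kubectl, point it at that file:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get pods -A   # kube-system should show coredns, metrics-server, local-path-provisioner, etc.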
Configuring a registry mirror
cat > /etc/rancher/k3s/registries.yaml <<EOF
mirrors:
  "docker.io":
    endpoint:
      - "https://7bezldxe.mirror.aliyuncs.com"
      - "https://registry-1.docker.io"
EOF
Restart k3s for the change to take effect:
systemctl restart k3s
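To confirm the mirror is in use, check the containerd configuration that k3s regenerates on restart (this is the config path used by the embedded containerd):
grep -A 3 'docker.io' /var/lib/rancher/k3s/agent/etc/containerd/config.toml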
Installing kubernetes-dashboard
Deploying kubernetes-dashboard
GITHUB_URL=https://github.com/kubernetes/dashboard/releases
VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')   # resolve the latest release tag
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml
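Wait until the dashboard pods are running before moving on:
kubectl get pods -n kubernetes-dashboard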
Configuring RBAC
Create an admin-user ServiceAccount:
cat <<EOF |kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF
Bind the cluster-admin role to it:
cat <<EOF |kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
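A quick check that both objects exist:
kubectl -n kubernetes-dashboard get serviceaccount admin-user
kubectl get clusterrolebinding admin-user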
Accessing kubernetes-dashboard
Expose the service as a NodePort:
kubectl patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kubernetes-dashboard
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard   # check the allocated NodePort
Get an access token:
kubectl -n kubernetes-dashboard create token admin-user
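The token from kubectl create token is short-lived (roughly an hour by default). For a longer test session you can request a longer lifetime, though the API server may cap the requested duration:
kubectl -n kubernetes-dashboard create token admin-user --duration=24h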
Dashboard UI screenshot
Installing the monitoring stack
Installing kube-prometheus
git clone https://github.com/prometheus-operator/kube-prometheus.git --depth=1 -b v0.11.0
cd kube-prometheus/manifests
kubectl apply -f setup/
kubectl apply -f ./
- One of the CRDs may fail to apply on the first pass; rerunning kubectl apply -f against that single file fixes it.
- For images that fail to pull, find a corresponding arm64 build on Docker Hub and substitute it (see the sketch after this list).
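A hypothetical example of such a swap, assuming kube-state-metrics were one of the failing workloads; the deployment name, container name, and replacement image here are placeholders to adapt to whatever actually fails in your cluster:
kubectl -n monitoring set image deployment/kube-state-metrics \
  kube-state-metrics=docker.io/bitnami/kube-state-metrics:2.5.0   # placeholder arm64-capable image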
Configuring data persistence
Edit prometheus-prometheus.yaml and append the following under spec:
  ...
  storage:
    volumeClaimTemplate:
      spec:
        resources:
          requests:
            storage: 50Gi
Apply the change:
kubectl apply -f prometheus-prometheus.yaml
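Then check whether the storage is provisioned and the prometheus pods start:
kubectl get pvc -n monitoring
kubectl get pods -n monitoring | grep prometheus-k8s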
Here the prometheus pods stayed in Pending: the default local-path provisioner configuration shipped with k3s is not fully compatible with the Kylin environment, so the PV is never created. Fix it by editing the local-path-config ConfigMap:
kubectl edit cm local-path-config -n kube-system
...
helperPod.yaml: |-
  apiVersion: v1
  kind: Pod
  metadata:
    name: helper-pod
  spec:
    containers:
    - name: helper-pod
      image: alpine   # the only change: replace the original busybox image with alpine
...
kubectl rollout restart deployment/local-path-provisioner -n kube-system   # restart the local-path-provisioner
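After the provisioner restarts, a local-path PV should be created and the PVC should move to Bound:
kubectl get pv
kubectl get pvc -n monitoring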
Accessing grafana
Expose grafana as a NodePort:
kubectl patch service grafana -p '{"spec":{"type":"NodePort"}}' -n monitoring
kubectl get svc grafana -n monitoring   # check the allocated NodePort
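A small helper for finding the URL, assuming the first service port is the web port; Grafana's stock default login (admin/admin) applies unless the manifests were customized:
NODE_PORT=$(kubectl get svc grafana -n monitoring -o jsonpath='{.spec.ports[0].nodePort}')
echo "Grafana: http://<node-ip>:${NODE_PORT}"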
Delete grafana's NetworkPolicy, which would otherwise block access from outside the cluster:
kubectl delete -f grafana-networkPolicy.yaml
Grafana UI screenshot