root@ubuntu:/home/ubuntu/k8s# kubeadm init --config kubeconfig.yml --v=5
I0623 15:19:11.938930 37878 initconfiguration.go:200] loading configuration from "kubeconfig.yml"
W0623 15:19:11.944626 37878 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.4
[preflight] Running pre-flight checks
I0623 15:19:11.945633 37878 checks.go:577] validating Kubernetes and kubeadm version
I0623 15:19:11.945850 37878 checks.go:166] validating if the firewall is enabled and active
I0623 15:19:11.960529 37878 checks.go:201] validating availability of port 6443
I0623 15:19:11.960893 37878 checks.go:201] validating availability of port 10259
I0623 15:19:11.961056 37878 checks.go:201] validating availability of port 10257
I0623 15:19:11.961190 37878 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0623 15:19:11.961294 37878 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0623 15:19:11.961370 37878 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0623 15:19:11.961438 37878 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0623 15:19:11.961490 37878 checks.go:432] validating if the connectivity type is via proxy or direct
I0623 15:19:11.961565 37878 checks.go:471] validating http connectivity to first IP address in the CIDR
I0623 15:19:11.961688 37878 checks.go:471] validating http connectivity to first IP address in the CIDR
I0623 15:19:11.961751 37878 checks.go:102] validating the container runtime
I0623 15:19:12.032291 37878 checks.go:128] validating if the service is enabled and active
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0623 15:19:12.145445 37878 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0623 15:19:12.145564 37878 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0623 15:19:12.145807 37878 checks.go:649] validating whether swap is enabled or not
I0623 15:19:12.146013 37878 checks.go:376] validating the presence of executable conntrack
I0623 15:19:12.146131 37878 checks.go:376] validating the presence of executable ip
I0623 15:19:12.146167 37878 checks.go:376] validating the presence of executable iptables
I0623 15:19:12.146388 37878 checks.go:376] validating the presence of executable mount
I0623 15:19:12.146486 37878 checks.go:376] validating the presence of executable nsenter
I0623 15:19:12.146512 37878 checks.go:376] validating the presence of executable ebtables
I0623 15:19:12.146634 37878 checks.go:376] validating the presence of executable ethtool
I0623 15:19:12.146659 37878 checks.go:376] validating the presence of executable socat
I0623 15:19:12.146682 37878 checks.go:376] validating the presence of executable tc
I0623 15:19:12.146845 37878 checks.go:376] validating the presence of executable touch
I0623 15:19:12.146913 37878 checks.go:520] running all checks
I0623 15:19:12.253470 37878 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0623 15:19:12.253958 37878 checks.go:618] validating kubelet version
I0623 15:19:12.346117 37878 checks.go:128] validating if the service is enabled and active
I0623 15:19:12.360103 37878 checks.go:201] validating availability of port 10250
I0623 15:19:12.360925 37878 checks.go:201] validating availability of port 2379
I0623 15:19:12.361165 37878 checks.go:201] validating availability of port 2380
I0623 15:19:12.361306 37878 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0623 15:19:12.404868 37878 checks.go:838] image exists: k8s.gcr.io/kube-apiserver:v1.18.4
I0623 15:19:12.468744 37878 checks.go:838] image exists: k8s.gcr.io/kube-controller-manager:v1.18.4
I0623 15:19:12.521984 37878 checks.go:838] image exists: k8s.gcr.io/kube-scheduler:v1.18.4
I0623 15:19:12.583021 37878 checks.go:838] image exists: k8s.gcr.io/kube-proxy:v1.18.4
I0623 15:19:12.634396 37878 checks.go:838] image exists: k8s.gcr.io/pause:3.2
I0623 15:19:12.719198 37878 checks.go:838] image exists: k8s.gcr.io/etcd:3.4.3-0
I0623 15:19:12.762239 37878 checks.go:838] image exists: k8s.gcr.io/coredns:1.6.7
I0623 15:19:12.762462 37878 kubelet.go:64] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0623 15:19:13.411341 37878 certs.go:103] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ubuntu kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [192.168.0.1 1.2.3.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0623 15:19:14.112386 37878 certs.go:103] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I0623 15:19:14.483629 37878 certs.go:103] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ubuntu localhost] and IPs [1.2.3.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ubuntu localhost] and IPs [1.2.3.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0623 15:19:15.730621 37878 certs.go:69] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0623 15:19:16.156190 37878 kubeconfig.go:79] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0623 15:19:16.457245 37878 kubeconfig.go:79] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0623 15:19:16.699880 37878 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0623 15:19:16.953342 37878 kubeconfig.go:79] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0623 15:19:17.574498 37878 manifests.go:91] [control-plane] getting StaticPodSpecs
I0623 15:19:17.575073 37878 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0623 15:19:17.575331 37878 manifests.go:104] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0623 15:19:17.575534 37878 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0623 15:19:17.575695 37878 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0623 15:19:17.576094 37878 manifests.go:104] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0623 15:19:17.576336 37878 manifests.go:104] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0623 15:19:17.592649 37878 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0623 15:19:17.593023 37878 manifests.go:91] [control-plane] getting StaticPodSpecs
W0623 15:19:17.593280 37878 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0623 15:19:17.593784 37878 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0623 15:19:17.593929 37878 manifests.go:104] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0623 15:19:17.594056 37878 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0623 15:19:17.594135 37878 manifests.go:104] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0623 15:19:17.594205 37878 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0623 15:19:17.594268 37878 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0623 15:19:17.594324 37878 manifests.go:104] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0623 15:19:17.594381 37878 manifests.go:104] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0623 15:19:17.595504 37878 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0623 15:19:17.595823 37878 manifests.go:91] [control-plane] getting StaticPodSpecs
W0623 15:19:17.596054 37878 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0623 15:19:17.596337 37878 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0623 15:19:17.596956 37878 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0623 15:19:17.597712 37878 local.go:72] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0623 15:19:17.597736 37878 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:147
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:147
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
/workspace/anago-v1.18.4-rc.0.49+d809d9abe5c1e3/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
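
The run fails in the wait-control-plane phase: the kubelet never brings the static control-plane Pods up within the 4m0s window, so kubeadm times out. The error text in the output already lists the first diagnostic steps; collected in one place they look roughly like this (the container ID is a placeholder taken from the docker ps output on your node):

# Is the kubelet running at all, and why is it restarting or failing?
systemctl status kubelet
journalctl -xeu kubelet

# Which control-plane containers did Docker actually start, and what do
# their logs say?
docker ps -a | grep kube | grep -v pause
docker logs <CONTAINER_ID>    # <CONTAINER_ID> from the previous command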
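In this particular run the most likely culprit is already flagged in the preflight output: Docker is not enabled as a service and is using the "cgroupfs" cgroup driver, while the recommended driver is "systemd"; a cgroup-driver mismatch between Docker and the kubelet is a common reason the control-plane Pods never come up. A minimal sketch of the fix the warnings point at, following the guide linked in the warning (the daemon.json shown here only sets the cgroup driver; merge it with any existing Docker daemon settings you have, and note that kubeadm reset wipes /etc/kubernetes and /var/lib/etcd on this node):

# Enable Docker and switch it to the systemd cgroup driver.
systemctl enable docker.service
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker

# Clean up the half-initialized state, then retry the init.
kubeadm reset -f
kubeadm init --config kubeconfig.yml --v=5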