k8s1@k8s1:~$ ./kk create cluster -f config-sample.yaml
 _   __      _          _   __
| | / /     | |        | | / /
| |/ /  _   _| |__   ___| |/ /  ___ _   _
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/
13:11:55 UTC [GreetingsModule] Greetings
13:11:57 UTC message: [master]
Greetings, KubeKey!
13:11:58 UTC message: [node2]
Greetings, KubeKey!
13:11:59 UTC message: [node1]
Greetings, KubeKey!
13:11:59 UTC success: [master]
13:11:59 UTC success: [node2]
13:11:59 UTC success: [node1]
13:11:59 UTC [NodePreCheckModule] A pre-check on nodes
13:12:00 UTC success: [node2]
13:12:00 UTC success: [node1]
13:12:00 UTC success: [master]
13:12:00 UTC [ConfirmModule] Display confirmation form
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name   | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| master | y    | y    | y       | y        | y     | y     |         | y         |        |        |            |            |             |                  | UTC 13:12:00 |
| node1  | y    | y    | y       | y        | y     | y     |         | y         |        |        |            |            |             |                  | UTC 13:12:00 |
| node2  | y    | y    | y       | y        | y     | y     |         | y         |        |        |            |            |             |                  | UTC 13:12:00 |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
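# The empty columns in the check above are tools KubeKey did not find: docker and
# containerd are installed by KubeKey itself later in this run, and the nfs/ceph/glusterfs
# clients are only needed for those storage backends, but ipvsadm and a time-sync daemon
# are worth adding up front. A minimal sketch, assuming Debian/Ubuntu hosts:
sudo apt-get update && sudo apt-get install -y ipvsadm chrony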
13:12:35 UTC success: [LocalHost]
13:12:35 UTC [NodeBinariesModule] Download installation binaries
13:12:35 UTC message: [localhost]
downloading amd64 kubeadm v1.23.10 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 43.1M 100 43.1M 0 0 1012k 0 0:00:43 0:00:43 --:--:-- 983k
13:13:20 UTC message: [localhost]
downloading amd64 kubelet v1.23.10 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 118M 100 118M 0 0 1022k 0 0:01:58 0:01:58 --:--:-- 1072k
13:15:21 UTC message: [localhost]
downloading amd64 kubectl v1.23.10 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 44.4M 100 44.4M 0 0 1031k 0 0:00:44 0:00:44 --:--:-- 1185k
13:16:07 UTC message: [localhost]
downloading amd64 helm v3.9.0 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 44.0M 100 44.0M 0 0 1021k 0 0:00:44 0:00:44 --:--:-- 1028k
13:16:52 UTC message: [localhost]
downloading amd64 kubecni v0.9.1 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 37.9M 100 37.9M 0 0 1041k 0 0:00:37 0:00:37 --:--:-- 1194k
13:17:30 UTC message: [localhost]
downloading amd64 crictl v1.24.0 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 13.8M 100 13.8M 0 0 1080k 0 0:00:13 0:00:13 --:--:-- 1218k
13:17:43 UTC message: [localhost]
downloading amd64 etcd v3.4.13 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 16.5M 100 16.5M 0 0 1034k 0 0:00:16 0:00:16 --:--:-- 1085k
13:18:00 UTC message: [localhost]
downloading amd64 docker 20.10.8 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 58.1M 100 58.1M 0 0 6954k 0 0:00:08 0:00:08 --:--:-- 7063k
13:18:10 UTC success: [LocalHost]
13:18:10 UTC [ConfigureOSModule] Get OS release
13:18:10 UTC success: [node1]
13:18:10 UTC success: [master]
13:18:10 UTC success: [node2]
13:18:10 UTC [ConfigureOSModule] Prepare to init OS
13:18:11 UTC success: [node1]
13:18:11 UTC success: [node2]
13:18:11 UTC success: [master]
13:18:11 UTC [ConfigureOSModule] Generate init os script
13:18:12 UTC success: [node1]
13:18:12 UTC success: [node2]
13:18:12 UTC success: [master]
13:18:12 UTC [ConfigureOSModule] Exec init os script
13:18:15 UTC stdout: [node1]
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
13:18:15 UTC stdout: [node2]
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
13:18:16 UTC stdout: [master]
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
13:18:16 UTC success: [node1]
13:18:16 UTC success: [node2]
13:18:16 UTC success: [master]
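# These sysctl values come from KubeKey's init-os script. A quick way to spot-check them
# on any node afterwards (parameter names taken from the output above):
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables vm.swappiness vm.max_map_count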
13:18:16 UTC [ConfigureOSModule] configure the ntp server for each node
13:18:16 UTC skipped: [node2]
13:18:16 UTC skipped: [master]
13:18:16 UTC skipped: [node1]
13:18:16 UTC [KubernetesStatusModule] Get kubernetes cluster status
13:18:17 UTC success: [master]
13:18:17 UTC [InstallContainerModule] Sync docker binaries
13:18:30 UTC success: [node1]
13:18:30 UTC success: [node2]
13:18:30 UTC success: [master]
13:18:30 UTC [InstallContainerModule] Generate docker service
13:18:30 UTC success: [node1]
13:18:30 UTC success: [master]
13:18:30 UTC success: [node2]
13:18:30 UTC [InstallContainerModule] Generate docker config
13:18:30 UTC success: [node1]
13:18:30 UTC success: [node2]
13:18:30 UTC success: [master]
13:18:30 UTC [InstallContainerModule] Enable docker
13:18:49 UTC success: [node1]
13:18:49 UTC success: [node2]
13:18:49 UTC success: [master]
13:18:49 UTC [InstallContainerModule] Add auths to container runtime
13:18:49 UTC skipped: [node1]
13:18:49 UTC skipped: [master]
13:18:49 UTC skipped: [node2]
13:18:49 UTC [PullModule] Start to pull images on all nodes
13:18:49 UTC message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
13:18:49 UTC message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
13:18:49 UTC message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
13:18:50 UTC message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.10
13:18:50 UTC message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.10
13:18:50 UTC message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.10
13:19:00 UTC message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.10
13:19:00 UTC message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
13:19:01 UTC message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
13:19:05 UTC message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
13:19:05 UTC message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
13:19:09 UTC message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.10
13:19:14 UTC message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.10
13:19:15 UTC message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
13:19:16 UTC message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
13:19:34 UTC message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
13:19:37 UTC message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
13:19:37 UTC message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
13:19:40 UTC message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
13:19:50 UTC message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
13:20:07 UTC message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
13:20:07 UTC message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
13:20:11 UTC message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
13:20:29 UTC message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
13:20:29 UTC message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
13:20:35 UTC message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
13:20:55 UTC message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
13:20:58 UTC success: [node1]
13:20:58 UTC success: [node2]
13:20:58 UTC success: [master]
13:20:58 UTC [ETCDPreCheckModule] Get etcd status
13:20:58 UTC success: [master]
13:20:58 UTC [CertsModule] Fetch etcd certs
13:20:58 UTC success: [master]
13:20:58 UTC [CertsModule] Generate etcd Certs
[certs] Generating "ca" certificate and key
[certs] admin-master serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local kubesphere.k8s.local localhost master node1 node2] and IPs [127.0.0.1 ::1 192.168.2.206 192.168.2.177 192.168.2.203]
[certs] member-master serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local kubesphere.k8s.local localhost master node1 node2] and IPs [127.0.0.1 ::1 192.168.2.206 192.168.2.177 192.168.2.203]
[certs] node-master serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local kubesphere.k8s.local localhost master node1 node2] and IPs [127.0.0.1 ::1 192.168.2.206 192.168.2.177 192.168.2.203]
13:21:01 UTC success: [LocalHost]
13:21:01 UTC [CertsModule] Synchronize certs file
13:21:02 UTC success: [master]
13:21:02 UTC [CertsModule] Synchronize certs file to master
13:21:02 UTC skipped: [master]
13:21:02 UTC [InstallETCDBinaryModule] Install etcd using binary
13:21:04 UTC success: [master]
13:21:04 UTC [InstallETCDBinaryModule] Generate etcd service
13:21:04 UTC success: [master]
13:21:04 UTC [InstallETCDBinaryModule] Generate access address
13:21:04 UTC success: [master]
13:21:04 UTC [ETCDConfigureModule] Health check on exist etcd
13:21:04 UTC skipped: [master]
13:21:04 UTC [ETCDConfigureModule] Generate etcd.env config on new etcd
13:21:04 UTC success: [master]
13:21:04 UTC [ETCDConfigureModule] Refresh etcd.env config on all etcd
13:21:05 UTC success: [master]
13:21:05 UTC [ETCDConfigureModule] Restart etcd
13:21:09 UTC stdout: [master]
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
13:21:09 UTC success: [master]
13:21:09 UTC [ETCDConfigureModule] Health check on all etcd
13:21:09 UTC success: [master]
13:21:09 UTC [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd
13:21:10 UTC success: [master]
13:21:10 UTC [ETCDConfigureModule] Health check on all etcd
13:21:10 UTC success: [master]
13:21:10 UTC [ETCDBackupModule] Backup etcd data regularly
13:21:10 UTC success: [master]
13:21:10 UTC [ETCDBackupModule] Generate backup ETCD service
13:21:10 UTC success: [master]
13:21:10 UTC [ETCDBackupModule] Generate backup ETCD timer
13:21:10 UTC success: [master]
13:21:10 UTC [ETCDBackupModule] Enable backup etcd service
13:21:11 UTC success: [master]
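# etcd now runs as a systemd service on the master, with a backup timer enabled. Its health
# can be re-checked by hand; a sketch assuming KubeKey's usual certificate layout under
# /etc/ssl/etcd/ssl (adjust the file names to the certs generated for your node):
ETCDCTL_API=3 etcdctl --endpoints=https://192.168.2.206:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/admin-master.pem \
  --key=/etc/ssl/etcd/ssl/admin-master-key.pem \
  endpoint health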
13:21:11 UTC [InstallKubeBinariesModule] Synchronize kubernetes binaries
13:21:48 UTC success: [node1]
13:21:48 UTC success: [master]
13:21:48 UTC success: [node2]
13:21:48 UTC [InstallKubeBinariesModule] Synchronize kubelet
13:21:48 UTC success: [node1]
13:21:48 UTC success: [node2]
13:21:48 UTC success: [master]
13:21:48 UTC [InstallKubeBinariesModule] Generate kubelet service
13:21:49 UTC success: [node1]
13:21:49 UTC success: [node2]
13:21:49 UTC success: [master]
13:21:49 UTC [InstallKubeBinariesModule] Enable kubelet service
13:21:52 UTC success: [node2]
13:21:52 UTC success: [node1]
13:21:52 UTC success: [master]
13:21:52 UTC [InstallKubeBinariesModule] Generate kubelet env
13:21:52 UTC success: [node1]
13:21:52 UTC success: [master]
13:21:52 UTC success: [node2]
13:21:52 UTC [InitKubernetesModule] Generate kubeadm config
13:21:53 UTC success: [master]
13:21:53 UTC [InitKubernetesModule] Init cluster using kubeadm
13:22:30 UTC stdout: [master]
W0407 13:21:53.232803 5651 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.23.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local kubesphere.k8s.local localhost master master.cluster.local node1 node1.cluster.local node2 node2.cluster.local] and IPs [10.233.0.1 192.168.2.206 127.0.0.1 192.168.2.177 192.168.2.203]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.505779 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 71j95c.vwt766fyex29car8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join kubesphere.k8s.local:6443 --token 71j95c.vwt766fyex29car8 \
--discovery-token-ca-cert-hash sha256:57fac9b5408266f709d667f092e5fb7e140e56480e9d2d10796007af241202a8 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join kubesphere.k8s.local:6443 --token 71j95c.vwt766fyex29car8 \
--discovery-token-ca-cert-hash sha256:57fac9b5408266f709d667f092e5fb7e140e56480e9d2d10796007af241202a8
13:22:30 UTC success: [master]
13:22:30 UTC [InitKubernetesModule] Copy admin.conf to ~/.kube/config
13:22:30 UTC success: [master]
13:22:30 UTC [InitKubernetesModule] Remove master taint
13:22:30 UTC skipped: [master]
13:22:30 UTC [InitKubernetesModule] Add worker label
13:22:30 UTC skipped: [master]
13:22:30 UTC [ClusterDNSModule] Generate coredns service
13:22:31 UTC success: [master]
13:22:31 UTC [ClusterDNSModule] Override coredns service
13:22:32 UTC stdout: [master]
service "kube-dns" deleted
13:22:35 UTC stdout: [master]
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
13:22:35 UTC success: [master]
13:22:35 UTC [ClusterDNSModule] Generate nodelocaldns
13:22:35 UTC success: [master]
13:22:35 UTC [ClusterDNSModule] Deploy nodelocaldns
13:22:35 UTC stdout: [master]
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
13:22:35 UTC success: [master]
13:22:35 UTC [ClusterDNSModule] Generate nodelocaldns configmap
13:22:36 UTC success: [master]
13:22:36 UTC [ClusterDNSModule] Apply nodelocaldns configmap
13:22:37 UTC stdout: [master]
configmap/nodelocaldns created
13:22:37 UTC success: [master]
13:22:37 UTC [KubernetesStatusModule] Get kubernetes cluster status
13:22:37 UTC stdout: [master]
v1.23.10
13:22:37 UTC stdout: [master]
master v1.23.10 [map[address:192.168.2.206 type:InternalIP] map[address:master type:Hostname]]
13:22:44 UTC stdout: [master]
I0407 13:22:41.146907 6828 version.go:255] remote version is much newer: v1.26.3; falling back to: stable-1.23
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
2f289571fc7a08495750bd040c609d138754bdb3b691e110dfc62d9e9a983a33
13:22:44 UTC stdout: [master]
secret/kubeadm-certs patched
13:22:45 UTC stdout: [master]
secret/kubeadm-certs patched
13:22:45 UTC stdout: [master]
secret/kubeadm-certs patched
13:22:45 UTC stdout: [master]
vbbjzr.kxrrp9i7ttjop1q5
13:22:45 UTC success: [master]
13:22:45 UTC [JoinNodesModule] Generate kubeadm config
13:22:46 UTC skipped: [master]
13:22:46 UTC success: [node1]
13:22:46 UTC success: [node2]
13:22:46 UTC [JoinNodesModule] Join control-plane node
13:22:46 UTC skipped: [master]
13:22:46 UTC [JoinNodesModule] Join worker node
13:22:54 UTC stdout: [node2]
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0407 13:22:47.815262 4922 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0407 13:22:47.827002 4922 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
13:22:54 UTC stdout: [node1]
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0407 13:22:47.919379 4966 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0407 13:22:47.929063 4966 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
13:22:54 UTC success: [node2]
13:22:54 UTC success: [node1]
13:22:54 UTC [JoinNodesModule] Copy admin.conf to ~/.kube/config
13:22:54 UTC skipped: [master]
13:22:54 UTC [JoinNodesModule] Remove master taint
13:22:54 UTC skipped: [master]
13:22:54 UTC [JoinNodesModule] Add worker label to master
13:22:54 UTC skipped: [master]
13:22:54 UTC [JoinNodesModule] Synchronize kube config to worker
13:22:55 UTC success: [node1]
13:22:55 UTC success: [node2]
13:22:55 UTC [JoinNodesModule] Add worker label to worker
13:22:55 UTC stdout: [node2]
node/node2 labeled
13:22:55 UTC stdout: [node1]
node/node1 labeled
13:22:55 UTC success: [node2]
13:22:55 UTC success: [node1]
13:22:55 UTC [DeployNetworkPluginModule] Generate calico
13:22:56 UTC success: [master]
13:22:56 UTC [DeployNetworkPluginModule] Deploy calico
13:22:58 UTC stdout: [master]
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
13:22:58 UTC success: [master]
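# Once the calico-node pods are running, all three nodes should report Ready. A quick check
# from the master (plain kubectl, nothing KubeKey-specific):
kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get nodes -o wide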
13:22:58 UTC [ConfigureKubernetesModule] Configure kubernetes
13:22:58 UTC success: [node2]
13:22:58 UTC success: [master]
13:22:58 UTC success: [node1]
13:22:58 UTC [ChownModule] Chown user $HOME/.kube dir
13:22:59 UTC success: [node1]
13:22:59 UTC success: [node2]
13:22:59 UTC success: [master]
13:22:59 UTC [AutoRenewCertsModule] Generate k8s certs renew script
13:23:00 UTC success: [master]
13:23:00 UTC [AutoRenewCertsModule] Generate k8s certs renew service
13:23:01 UTC success: [master]
13:23:01 UTC [AutoRenewCertsModule] Generate k8s certs renew timer
13:23:02 UTC success: [master]
13:23:02 UTC [AutoRenewCertsModule] Enable k8s certs renew service
13:23:03 UTC success: [master]
13:23:03 UTC [SaveKubeConfigModule] Save kube config as a configmap
13:23:05 UTC success: [LocalHost]
13:23:05 UTC [AddonsModule] Install addons
13:23:05 UTC success: [LocalHost]
13:23:05 UTC [DeployStorageClassModule] Generate OpenEBS manifest
13:23:05 UTC success: [master]
13:23:05 UTC [DeployStorageClassModule] Deploy OpenEBS as cluster default StorageClass
13:23:19 UTC success: [master]
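# KubeKey has deployed OpenEBS LocalPV as the cluster's default StorageClass. To confirm
# which class carries the "(default)" marker (typically named "local"):
kubectl get storageclass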
13:23:19 UTC [DeployKubeSphereModule] Generate KubeSphere ks-installer crd manifests
13:23:20 UTC success: [master]
13:23:20 UTC [DeployKubeSphereModule] Apply ks-installer
13:23:28 UTC stdout: [master]
namespace/kubesphere-system created
serviceaccount/ks-installer created
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
13:23:28 UTC success: [master]
13:23:28 UTC [DeployKubeSphereModule] Add config to ks-installer manifests
13:23:28 UTC success: [master]
13:23:28 UTC [DeployKubeSphereModule] Create the kubesphere namespace
13:23:30 UTC success: [master]
13:23:30 UTC [DeployKubeSphereModule] Setup ks-installer config
13:23:31 UTC stdout: [master]
secret/kube-etcd-client-certs created
13:23:31 UTC success: [master]
13:23:31 UTC [DeployKubeSphereModule] Apply ks-installer
13:23:34 UTC stdout: [master]
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
13:23:34 UTC success: [master]
Please wait for the installation to complete: >>--->
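# From here the installation continues inside the cluster. Its progress can be followed
# from the ks-installer deployment created above; a sketch:
kubectl logs -n kubesphere-system deploy/ks-installer -f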