In this post we will install a multi-master Kubernetes cluster behind Tailscale VPN.
This scenario can be useful when:
Your Kubernetes nodes are not in the same subnet.
You are building a home-lab system, and the nodes are behind two or more NATed networks, or even behind CGNAT.
Your nodes are running in separate data centers, and you don't want to expose the API ports on the public internet.
You want to access your cluster only from a private VPN network.
You want extra security through encrypted connections between the nodes.
Or any mixture of the above scenarios.
Why Tailscale VPN?
You can use any other VPN solution like WireGuard, OpenVPN, IPsec, etc., but nowadays I think Tailscale is the easiest way to bring up a VPN network.
With a free registration you get 100 devices, subnet routers, exit nodes, (Magic)DNS, and many other useful features.
But as I mentioned, you can use any other VPN solution; personally, I'm using WireGuard in my home-lab system.
Warning
Tailscale assigns IP addresses from the 100.64.0.0/10 range! See IP Address Assignment.
If you are planning to use Kube-OVN networking, don't forget to change the CIDR, because Kube-OVN also uses this subnet by default!
As I mentioned, we will deploy a multi-master Kubernetes cluster:
3 combined master/worker nodes, with no dedicated worker nodes. Additional worker nodes can be added to the cluster later, but for simplicity we won't deploy any.
We need an additional TCP load balancer for the API requests. I prefer HAProxy for this purpose, because it is easy to set up and lightweight.
For this lab I will deploy only one load balancer, but if you need an HA solution, at least two load balancers are needed. This can be achieved with Keepalived, or you can use an external load balancer like F5. This demo is not about HA load balancers, though, so a single LB is enough; a minimal Keepalived sketch follows for reference.
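If you do go the Keepalived route, the idea is to float a virtual IP between two HAProxy hosts and point kubeadm's control-plane endpoint at that VIP. The snippet below is only a sketch: the interface name eth0, the router ID, the password, and the VIP 172.16.1.100 are assumptions for illustration, not values used elsewhere in this post.

# /etc/keepalived/keepalived.conf -- minimal sketch, all values are assumptions
vrrp_instance K8S_API {
    state MASTER                # use BACKUP on the second HAProxy host
    interface eth0              # adjust to your NIC name
    virtual_router_id 51
    priority 100                # use a lower priority (e.g. 90) on the BACKUP host
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cret        # hypothetical shared secret
    }
    virtual_ipaddress {
        172.16.1.100/24         # hypothetical VIP for the Kubernetes API
    }
}

The second HAProxy host would run the same HAProxy configuration, so whichever node holds the VIP can serve API traffic.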
| Hostname       | Role                  | IP Address   | VPN IP Address |
|----------------|-----------------------|--------------|----------------|
| kube02-m1      | Control Plane Node 1  | 172.16.1.77  | Later          |
| kube02-m2      | Control Plane Node 2  | 172.16.1.78  | Later          |
| kube02-m3      | Control Plane Node 3  | 172.16.1.79  | Later          |
| kube02-haproxy | HAProxy Load Balancer | 172.16.1.80  | Later          |
| ansible        | Ansible Host          | 172.16.0.252 | ---            |
Note
You don't need the additional Ansible host if you prepare the OS manually.
Note
You can use one of the Kubernetes nodes for HAProxy, but in this case you need to change either the HAProxy listen port or the --apiserver-bind-port (kubeadm init) so the two don't collide.
The nodes in this test environment are connected to each other on the same subnet.
In this post I'm using Ansible to prepare the Debian OSes for Kubernetes installation.
I highly recommend using some kind of automation tool or script to maintain your infrastructure, especially if you plan to have a bunch of nodes, not just a home-lab.
And if something goes wrong, you can start over in a minute.
yq_url: Yq binary URL. This version of yq will be installed on the hosts.
kube_version: Here you can define which version of Kubernetes you want to install. (kubelet, kubeadm and kubectl)
common_packages: These packages will be installed on the hosts. They are called "common packages" because I usually install them on my VMs regardless of whether I'm deploying Docker or Kubernetes.
docker_packages: Packages for installing Docker/Containerd engine.
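For reference, a myvars.yml with these variables could look something like the sketch below; the yq release URL, the pinned Kubernetes version, and the package lists are example values of mine, so adjust them to your environment.

# myvars.yml -- example values only
yq_url: "https://github.com/mikefarah/yq/releases/download/v4.33.3/yq_linux_amd64"
kube_version: "1.26.4-00"
common_packages:
  - vim
  - curl
  - git
  - htop
  - gnupg
  - ca-certificates
  - apt-transport-https
docker_packages:
  - docker-ce
  - docker-ce-cli
  - containerd.io
  - docker-buildx-plugin
  - docker-compose-plugin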
The Ansible host must be able to access the VMs over SSH. Before you run any of the playbooks, please enable root login.
For example: sed -i -e 's/^#\(PermitRootLogin \).*/\1 yes/' /etc/ssh/sshd_config and restart sshd daemon.
It is highly recommended to use a dedicated Ansible user (with sudo rights) and SSH key authentication!
And don't forget to accept the SSH host keys by logging in to the remote systems before running the playbooks.
If you are using a user other than root, you will want to use the become: 'yes' option in the plays.
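A minimal inventory for the pve-kube02 group used by the playbooks could look like the sketch below; the ansible_user and the key path are assumptions, so adapt them to your setup.

# inventory.ini -- sketch only, user and key path are assumptions
[pve-kube02]
kube02-m1 ansible_host=172.16.1.77
kube02-m2 ansible_host=172.16.1.78
kube02-m3 ansible_host=172.16.1.79

[pve-kube02:vars]
ansible_user=ansible
ansible_ssh_private_key_file=~/.ssh/id_ed25519
ansible_become=true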
- hosts: pve-kube02
  name: Install
  tasks:
    - name: Run Included Task - Upgrade Debian
      ansible.builtin.import_tasks:
        file: task_allow_release_info_change.yaml
    - name: Reboot the machine
      ansible.builtin.reboot:
- name: Gather Facts
  hosts: 127.0.0.1
  connection: local
  tasks:
    - include_vars: myvars.yml
    - name: Download Yq
      ansible.builtin.get_url:
        url: "{{ yq_url }}"
        dest: /tmp/yq
        mode: '755'
    - name: Calculate MD5
      ansible.builtin.stat:
        path: /tmp/yq
        checksum_algorithm: md5
      register: yq_md5
    - name: Delete /tmp/yq
      ansible.builtin.file:
        path: /tmp/yq
        state: absent

- hosts: pve-kube02
  name: Install
  become: 'yes'
  tasks:
    - include_vars: myvars.yml
    - name: Run the equivalent of "apt-get update" as a separate step
      ansible.builtin.apt:
        update_cache: yes
    - name: Set Fact For YQ md5
      set_fact:
        yq_checksum: "{{ hostvars['127.0.0.1']['yq_md5'].stat.checksum }}"
    - name: debug
      debug:
        msg: "MD5 hash: {{ yq_checksum }}"
    - name: Ensure a list of packages installed
      ansible.builtin.apt:
        name: "{{ common_packages }}"
        state: present
    - name: All done!
      debug:
        msg: Packages have been successfully installed
    - name: Calculate Already Existing yq hash
      ansible.builtin.stat:
        path: /usr/bin/yq
        checksum_algorithm: md5
      register: exist_yq_md5
    - name: Print Existing yq md5 hash
      debug:
        msg: "MD5 hash of existing: {{ exist_yq_md5.stat.checksum }}"
      when: exist_yq_md5.stat.exists == true
    - name: Remove Old Version Of YQ
      ansible.builtin.file:
        path: /usr/bin/yq
        state: absent
      when: exist_yq_md5.stat.exists == false or exist_yq_md5.stat.checksum != yq_checksum
    - name: Download Yq
      ansible.builtin.get_url:
        url: "{{ yq_url }}"
        dest: /usr/bin/yq
        mode: '755'
      when: exist_yq_md5.stat.exists == false or exist_yq_md5.stat.checksum != yq_checksum
    - name: Fix Vimrc
      ansible.builtin.replace:
        path: /etc/vim/vimrc
        regexp: '^"\s?(let g:skip_defaults_vim.*)'
        replace: '\1'
    - name: Fix Vimrc 2
      ansible.builtin.replace:
        path: /etc/vim/vimrc
        regexp: '^"\s?(set compatible.*)'
        replace: '\1'
    - name: Fix Vimrc 3
      ansible.builtin.replace:
        path: /etc/vim/vimrc
        regexp: '^"\s?(set background).*'
        replace: '\1=dark'
    - name: Fix Vimrc 4
      ansible.builtin.replace:
        path: /etc/vim/vimrc
        regexp: '^"\s?(syntax on).*'
        replace: '\1'
    - name: Fix Vimrc 5
      ansible.builtin.replace:
        path: /etc/vim/vimrc
        regexp: '^"\s?(set mouse).*'
        replace: '\1=c'
    - name: Allow 'sudo' group to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        state: present
        regexp: '^%sudo'
        line: '%sudo ALL=(ALL:ALL) NOPASSWD: ALL'
        validate: visudo -cf %s
- hosts: pve-kube02
  become: 'yes'
  tasks:
    - include_vars: myvars.yml
    - name: determine codename
      command: "lsb_release -cs"
      register: release_output
    - set_fact:
        codename: "{{ release_output.stdout }}"
    - name: Run the equivalent of "apt-get update" as a separate step
      ansible.builtin.apt:
        update_cache: yes
    - name: add Docker GPG key
      apt_key:
        url: https://download.docker.com/linux/debian/gpg
        state: present
    - name: add docker repository to apt
      apt_repository:
        repo: deb https://download.docker.com/linux/debian "{{ codename }}" stable
        state: present
    - name: add tailscale gpg key
      apt_key:
        url: https://pkgs.tailscale.com/stable/debian/bullseye.noarmor.gpg
        state: present
    - name: add tailscale repository to apt
      apt_repository:
        repo: deb https://pkgs.tailscale.com/stable/debian "{{ codename }}" main
        state: present
    - name: install docker
      ansible.builtin.apt:
        name: "{{ docker_packages }}"
        state: present
        update_cache: yes
    - name: install tailscale
      apt:
        name: tailscale
        state: latest
    - name: check if docker is started properly
      service:
        name: docker
        state: started
        enabled: yes
- hosts: pve-kube02
  become: 'yes'
  vars:
    kubepackages:
      - kubelet={{ kube_version }}
      - kubeadm={{ kube_version }}
      - kubectl={{ kube_version }}
  tasks:
    - include_vars: myvars.yml
    - name: Register architecture (dpkg_output)
      command: "dpkg --print-architecture"
      register: dpkg_output
    - set_fact:
        arch: "{{ dpkg_output.stdout }}"
    - name: Register lsb_release
      command: "lsb_release -cs"
      register: release_output
    - set_fact:
        codename: "{{ release_output.stdout }}"
    - name: Add Kubernetes gpg to keyring
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present
    - name: add kubernetes repository to apt
      apt_repository:
        repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
    - name: Disable SWAP since kubernetes can't work with swap enabled (1/2)
      shell: |
        swapoff -a
    - name: Disable SWAP in fstab since kubernetes can't work with swap enabled (2/2)
      replace:
        path: /etc/fstab
        regexp: '^([^#].*?\sswap\s+sw\s+.*)$'
        replace: '#\1'
    - name: Enable overlay & br_netfilter module
      ansible.builtin.copy:
        content: |
          overlay
          br_netfilter
        dest: /etc/modules-load.d/k8s.conf
    - name: Running modprobe
      shell: |
        modprobe overlay
        modprobe br_netfilter
    - name: Set up sysctl /etc/sysctl.d/k8s.conf
      ansible.builtin.copy:
        content: |
          net.bridge.bridge-nf-call-iptables = 1
          net.bridge.bridge-nf-call-ip6tables = 1
          net.ipv4.ip_forward = 1
        dest: /etc/sysctl.d/k8s.conf
    - name: Add the overlay module
      community.general.modprobe:
        name: overlay
        state: present
    - name: Add the br_netfilter module
      community.general.modprobe:
        name: br_netfilter
        state: present
    - name: sysctl
      ansible.builtin.shell: "sysctl --system"
    - name: Generate default containerd config
      ansible.builtin.shell: "containerd config default > /etc/containerd/config.toml"
    - name: Change /etc/containerd/config.toml file SystemdCgroup to true
      ansible.builtin.replace:
        path: /etc/containerd/config.toml
        after: 'plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options'
        before: 'plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]'
        regexp: 'SystemdCgroup.*'
        replace: 'SystemdCgroup = true'
      diff: yes
    - name: Run the equivalent of "apt-get update" as a separate step
      ansible.builtin.apt:
        update_cache: yes
    - name: Install Kubernetes Packages
      ansible.builtin.apt:
        name: "{{ kubepackages }}"
        state: present
    - name: Prevent kubelet from being upgraded
      ansible.builtin.dpkg_selections:
        name: kubelet
        selection: hold
    - name: Prevent kubeadm from being upgraded
      ansible.builtin.dpkg_selections:
        name: kubeadm
        selection: hold
    - name: Prevent kubectl from being upgraded
      ansible.builtin.dpkg_selections:
        name: kubectl
        selection: hold
    - name: Prevent containerd.io from being upgraded
      ansible.builtin.dpkg_selections:
        name: containerd.io
        selection: hold
    - name: Fix crictl error
      ansible.builtin.copy:
        content: |
          runtime-endpoint: unix:///run/containerd/containerd.sock
          image-endpoint: unix:///run/containerd/containerd.sock
          timeout: 2
          debug: false
          pull-image-on-create: false
        dest: /etc/crictl.yaml
    - name: Restart containerd and issue daemon-reload to pick up config changes
      ansible.builtin.systemd:
        state: restarted
        daemon_reload: true
        name: containerd
    - name: Install docker-compose from official github repo
      get_url:
        url: https://github.com/docker/compose/releases/download/v2.15.1/docker-compose-linux-x86_64
        dest: /usr/local/bin/docker-compose
        mode: 'u+x,g+x'
Run this playbook:
ansible-playbook playbook-install-kubernetes.yaml
Now we have 3 identical nodes waiting for us to install and configure the Tailscale VPN and the Kubernetes cluster.
Before we proceed, I would like to share some really useful links and tips. These are especially helpful if you are not familiar with Ansible and don't want to bother with it:
This step is optional. I want to simulate the situation where the nodes are not sitting in the same subnet and can talk to each other only over the Tailscale VPN.
This way it may be easier to understand what we are doing with the VPN.
I don't want to make it complicated, so I simply disable the direct communication between the nodes with iptables, as sketched below.
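A minimal sketch of that, assuming the LAN addresses from the table above: on each node, drop incoming traffic from the other nodes' LAN IPs so that only the VPN path remains.

# On kube02-m1: block direct LAN traffic from the other two nodes
iptables -A INPUT -s 172.16.1.78 -j DROP
iptables -A INPUT -s 172.16.1.79 -j DROP
# Repeat on kube02-m2 and kube02-m3 with the respective peer addresses.

Note that this also prevents Tailscale from using the LAN path between the nodes, which is exactly the "different subnets" situation we want to simulate.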
This is one of my favorite features of Tailscale: you don't need a stable, static public IP address to use the VPN service.
But keep in mind that a connection over a relay server can be significantly slower than a direct connection.
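You can check whether two nodes are talking directly or through a DERP relay, for example with tailscale ping (the peer name below is just the one from this lab):

tailscale ping kube02-m2
tailscale status

With the VPN in place, we pin the kubelet to the Tailscale IP so that node-to-node Kubernetes traffic uses the tailnet addresses: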
root@kube02-m1:~# echo "KUBELET_EXTRA_ARGS=--node-ip=$(tailscale ip --4)" | tee -a /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=100.122.123.2
root@kube02-m1:~#
root@kube02-m2:~# echo "KUBELET_EXTRA_ARGS=--node-ip=$(tailscale ip --4)" | tee -a /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=100.103.128.9
root@kube02-m2:~#
root@kube02-m3:~# echo "KUBELET_EXTRA_ARGS=--node-ip=$(tailscale ip --4)" | tee -a /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=100.124.70.97
root@kube02-m3:~#
I don't want to waste a lot of time on this task, since this is only a lab environment with just one purpose: to demonstrate the installation.
HAProxy is a really good example of how to configure an external load balancer for the Kubernetes control plane nodes.
frontend kubeapi
log global
bind *:6443
mode tcp
option tcplog
default_backend kubecontroleplain
backend kubecontroleplain
option httpchk GET /healthz
http-check expect status 200
mode tcp
log global
balance roundrobin
#option tcp-check
option ssl-hello-chk
server kube02-m1 kube02-m1.tailnet-a5cd.ts.net:6443 check
server kube02-m2 kube02-m2.tailnet-a5cd.ts.net:6443 check
server kube02-m3 kube02-m3.tailnet-a5cd.ts.net:6443 check
frontend stats
mode http
bind *:8404
stats enable
stats uri /stats
stats refresh 10s
stats admin if LOCALHOST
Warning
As far as I know, HAProxy resolves DNS names only once, at startup, so use DNS names in the server section with caution. If an IP address changes, do not forget to restart HAProxy.
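For example, after a backend's address changes (the service name below is the stock Debian unit):

systemctl restart haproxy
# then verify backend health on the stats page configured above
curl -s http://localhost:8404/stats | grep kube02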
If you are looking for a solution without an external load balancer, you may want to take a look at Kube-Vip.
Quote
kube-vip provides Kubernetes clusters with a virtual IP and load balancer for both the control plane (for building a highly-available cluster) and Kubernetes Services of type LoadBalancer without relying on any external hardware or software.
If you don't have a separate HAProxy node and you are running HAProxy on one of the Kubernetes nodes, you should consider changing either the --apiserver-bind-port or the listen port of HAProxy, since both would otherwise use port 6443 on the same host.
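One option, sketched below, is to keep the API server on its default 6443 and move the HAProxy frontend to another port; 16443 is just an arbitrary choice for illustration, and kubeadm's --control-plane-endpoint would then have to include that port.

frontend kubeapi
    log global
    bind *:16443
    mode tcp
    option tcplog
    default_backend kubecontroleplain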
Important
The --pod-network-cidr and --service-cidr options are required by the flannel CNI. Note that flannel's default manifest assumes 10.244.0.0/16 as the pod network, so if you use a different pod CIDR (as below), adjust the flannel configuration accordingly.
Important
Do not forget the --upload-certs option, otherwise additional control plane nodes won't be able to join the cluster without extra steps.
root@kube02-m1:# kubeadm init --cri-socket /var/run/containerd/containerd.sock \
--control-plane-endpoint kube02-haproxy.tailnet-a5cd.ts.net \
--apiserver-advertise-address $(tailscale ip --4) \
--pod-network-cidr 10.25.0.0/16 \
--service-cidr 10.26.0.0/16 \
--upload-certs
W0421 16:13:16.891232 25655 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/containerd/containerd.sock". Please update your configuration!
I0421 16:13:17.241235 25655 version.go:256] remote version is much newer: v1.27.1; falling back to: stable-1.26
[init] Using Kubernetes version: v1.26.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube02-haproxy.tailnet-a5cd.ts.net kube02-m1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.26.0.1 100.122.123.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube02-m1 localhost] and IPs [100.122.123.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube02-m1 localhost] and IPs [100.122.123.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 101.038704 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
2f2caa21e13d7f4bece27faa2515d024c8b4e93e08d8d21612113a7ebacff5ea
[mark-control-plane] Marking the node kube02-m1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kube02-m1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 1q32dn.swfpr7qj89hl2g4j
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join kube02-haproxy.tailnet-a5cd.ts.net:6443 --token 1q32dn.swfpr7qj89hl2g4j \
--discovery-token-ca-cert-hash sha256:11c669ee4e4e27b997ae5431133dd2cd7c6a2050ddd16b38bee8bee544bbe680 \
--control-plane --certificate-key 2f2caa21e13d7f4bece27faa2515d024c8b4e93e08d8d21612113a7ebacff5ea
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join kube02-haproxy.tailnet-a5cd.ts.net:6443 --token 1q32dn.swfpr7qj89hl2g4j \
--discovery-token-ca-cert-hash sha256:11c669ee4e4e27b997ae5431133dd2cd7c6a2050ddd16b38bee8bee544bbe680
root@kube02-m2:# kubeadm join kube02-haproxy.tailnet-a5cd.ts.net:6443 --token 1q32dn.swfpr7qj89hl2g4j --apiserver-advertise-address $(tailscale ip --4) --cri-socket /var/run/containerd/containerd.sock --discovery-token-ca-cert-hash sha256:11c669ee4e4e27b997ae5431133dd2cd7c6a2050ddd16b38bee8bee544bbe680 --control-plane --certificate-key 2f2caa21e13d7f4bece27faa2515d024c8b4e93e08d8d21612113a7ebacff5ea
W0421 16:23:11.602945 26931 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/containerd/containerd.sock". Please update your configuration!
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube02-haproxy.tailnet-a5cd.ts.net kube02-m2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.26.0.1 100.103.128.9]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube02-m2 localhost] and IPs [100.103.128.9 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube02-m2 localhost] and IPs [100.103.128.9 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node kube02-m2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kube02-m2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.