4.6 Adding the worker nodes to the k8s cluster
Next, join the other two worker nodes to the k8s cluster.
When kubeadm init completed, it printed a line of the form: kubeadm join 192.168.110.130:6443 --token nta3x4.3e54l2dqtmj9tlry --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8. Running this command on each of the two worker nodes joins it to the k8s cluster.
If you have forgotten the join token, the following command generates a fresh join command:
- [root@k8scloude1 ~]# kubeadm token create --print-join-command
- kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
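As an aside, tokens created this way expire after 24 hours by default. You can also list the tokens that currently exist, and the discovery CA cert hash can be recomputed from the cluster CA certificate; the commands below are a sketch of both, run on the control-plane node (the openssl pipeline is the one given in the kubeadm documentation):
- #List existing bootstrap tokens and their expiry times
- [root@k8scloude1 ~]# kubeadm token list
- #Recompute the value for --discovery-token-ca-cert-hash from the CA certificate
- [root@k8scloude1 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'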
Run the join command on the other two nodes:
- [root@k8scloude2 ~]# kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
- [preflight] Running pre-flight checks
- [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
- [preflight] Reading configuration from the cluster...
- [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
- [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
- [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
- [kubelet-start] Starting the kubelet
- [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
- This node has joined the cluster:
- * Certificate signing request was sent to apiserver and a response was received.
- * The Kubelet was informed of the new secure connection details.
- Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
- [root@k8scloude3 ~]# kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
- [preflight] Running pre-flight checks
- [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
- [preflight] Reading configuration from the cluster...
- [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
- [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
- [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
- [kubelet-start] Starting the kubelet
- [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
- This node has joined the cluster:
- * Certificate signing request was sent to apiserver and a response was received.
- * The Kubelet was informed of the new secure connection details.
- Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
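Both joins succeed, but note the IsDockerSystemdCheck warning in the pre-flight output. The usual fix is to switch Docker's cgroup driver to systemd on every node. A minimal sketch follows; it assumes /etc/docker/daemon.json contains nothing else yet (if it already holds settings such as registry mirrors, merge the key in by hand instead of overwriting the file):
- #Set Docker's cgroup driver to systemd, the driver recommended by the kubelet
- cat <<EOF > /etc/docker/daemon.json
- {
-   "exec-opts": ["native.cgroupdriver=systemd"]
- }
- EOF
- systemctl daemon-reload && systemctl restart docker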
Check the node status on k8scloude1; you can see that both worker nodes have joined the k8s cluster:
- [root@k8scloude1 ~]# kubectl get nodes
- NAME STATUS ROLES AGE VERSION
- k8scloude1 NotReady control-plane,master 8m43s v1.21.0
- k8scloude2 NotReady <none> 28s v1.21.0
- k8scloude3 NotReady <none> 25s v1.21.0
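If you want to confirm that NotReady is really caused by the missing network plugin and not something else, describe one of the nodes; with the docker runtime the Ready condition typically reports NetworkPluginNotReady with a message about the cni config being uninitialized. A quick check, using a node name from the output above:
- [root@k8scloude1 ~]# kubectl describe node k8scloude2 | grep -i network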
Notice that after joining the cluster, each worker node has two additional images:
- [root@k8scloude2 ~]# docker images
- REPOSITORY TAG IMAGE ID CREATED SIZE
- registry.aliyuncs.com/google_containers/kube-proxy v1.21.0 38ddd85fe90e 9 months ago 122MB
- registry.aliyuncs.com/google_containers/pause 3.4.1 0f8457a4c2ec 12 months ago 683kB
- [root@k8scloude3 ~]# docker images
- REPOSITORY TAG IMAGE ID CREATED SIZE
- registry.aliyuncs.com/google_containers/kube-proxy v1.21.0 38ddd85fe90e 9 months ago 122MB
- registry.aliyuncs.com/google_containers/pause 3.4.1 0f8457a4c2ec 12 months ago 683kB
4.7 Deploying the calico CNI network plugin
Although the k8s cluster now has one master node and two worker nodes, all three nodes are still in the NotReady state. The reason is that no CNI network plugin is installed, and one is required for pod communication across nodes. The most common CNI plugins are calico and flannel; the key difference is that flannel does not support network policies, while calico does. Since we will be configuring k8s NetworkPolicy later in this series, this article uses calico as the CNI plugin.
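To make the difference concrete, below is a minimal sketch of the kind of NetworkPolicy object that calico can enforce but flannel silently ignores; the demo namespace and the app=web label are made up for illustration:
- #Deny all ingress traffic to pods labeled app=web in the demo namespace
- apiVersion: networking.k8s.io/v1
- kind: NetworkPolicy
- metadata:
-   name: deny-ingress-to-web
-   namespace: demo
- spec:
-   podSelector:
-     matchLabels:
-       app: web
-   policyTypes:
-   - Ingress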
Now download the calico.yaml file from the official site:
Official site: https://projectcalico.docs.tigera.io/about/about-calico
Search for calico.yaml in the site's search box to find the download command.
Download the calico.yaml file:
- [root@k8scloude1 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
- % Total % Received % Xferd Average Speed Time Time Time Current
- Dload Upload Total Spent Left Speed
- 100 212k 100 212k 0 0 44222 0 0:00:04 0:00:04 --:--:-- 55704
- [root@k8scloude1 ~]# ls
- calico.yaml
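Note that the URL above always serves the latest calico manifest, so the version you get may differ from the v3.21.2 used below. The Calico docs site also serves versioned manifests; the archive path below is an assumption based on how those docs are laid out, so verify it before relying on it:
- #Download the manifest for a specific Calico release instead of the latest one
- [root@k8scloude1 ~]# curl https://docs.projectcalico.org/archive/v3.21/manifests/calico.yaml -O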
Check which calico images need to be pulled. These four images are required on every node; k8scloude1 is shown as an example:
- [root@k8scloude1 ~]# grep image calico.yaml
- image: docker.io/calico/cni:v3.21.2
- image: docker.io/calico/cni:v3.21.2
- image: docker.io/calico/pod2daemon-flexvol:v3.21.2
- image: docker.io/calico/node:v3.21.2
- image: docker.io/calico/kube-controllers:v3.21.2
- [root@k8scloude1 ~]# docker pull docker.io/calico/cni:v3.21.2
- v3.21.2: Pulling from calico/cni
- Digest: sha256:ce618d26e7976c40958ea92d40666946d5c997cd2f084b6a794916dc9e28061b
- Status: Image is up to date for calico/cni:v3.21.2
- docker.io/calico/cni:v3.21.2
- [root@k8scloude1 ~]# docker pull docker.io/calico/pod2daemon-flexvol:v3.21.2
- v3.21.2: Pulling from calico/pod2daemon-flexvol
- Digest: sha256:b034c7c886e697735a5f24e52940d6d19e5f0cb5bf7caafd92ddbc7745cfd01e
- Status: Image is up to date for calico/pod2daemon-flexvol:v3.21.2
- docker.io/calico/pod2daemon-flexvol:v3.21.2
- [root@k8scloude1 ~]# docker pull docker.io/calico/node:v3.21.2
- v3.21.2: Pulling from calico/node
- Digest: sha256:6912fe45eb85f166de65e2c56937ffb58c935187a84e794fe21e06de6322a4d0
- Status: Image is up to date for calico/node:v3.21.2
- docker.io/calico/node:v3.21.2
- [root@k8scloude1 ~]# docker pull docker.io/calico/kube-controllers:v3.21.2
- v3.21.2: Pulling from calico/kube-controllers
- d6a693444ed1: Pull complete
- a5399680e995: Pull complete
- 8f0eb4c2bcba: Pull complete
- 52fe18e41b06: Pull complete
- 2f8d3f9f1a40: Pull complete
- bc94a7e3e934: Pull complete
- 55bf7cf53020: Pull complete
- Digest: sha256:1f4fcdcd9d295342775977b574c3124530a4b8adf4782f3603a46272125f01bf
- Status: Downloaded newer image for calico/kube-controllers:v3.21.2
- docker.io/calico/kube-controllers:v3.21.2
- #The main images are the following four
- [root@k8scloude1 ~]# docker images
- REPOSITORY TAG IMAGE ID CREATED SIZE
- calico/node v3.21.2 f1bca4d4ced2 4 weeks ago 214MB
- calico/pod2daemon-flexvol v3.21.2 7778dd57e506 5 weeks ago 21.3MB
- calico/cni v3.21.2 4c5c32530391 5 weeks ago 239MB
- calico/kube-controllers v3.21.2 b20652406028 5 weeks ago 132MB
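Rather than pulling each image by hand on every node, you can extract the image list straight from the manifest and loop over it; a small sketch, assuming GNU grep (which CentOS 7 ships):
- #Pull every image referenced in calico.yaml (run this on each node)
- grep -oP 'image:\s*\K\S+' calico.yaml | sort -u | xargs -n1 docker pull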
Edit the calico.yaml file. The CALICO_IPV4POOL_CIDR range must match the pod network CIDR given to kubeadm init, and the YAML indentation must stay aligned, otherwise applying the file will fail:
- [root@k8scloude1 ~]# vim calico.yaml
- [root@k8scloude1 ~]# cat calico.yaml | egrep 'CALICO_IPV4POOL_CIDR|"10.244"'
- - name: CALICO_IPV4POOL_CIDR
- value: "10.244.0.0/16"
If the grep output is hard to read, the edited section of calico.yaml should look like the snippet below.
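In the stock manifest this setting ships commented out, with a default of 192.168.0.0/16. After editing, the block should look roughly like the following, with the indentation matching the surrounding env entries:
- #Before (as shipped in calico.yaml):
- # - name: CALICO_IPV4POOL_CIDR
- #   value: "192.168.0.0/16"
- #After (uncommented and set to the kubeadm pod network CIDR):
- - name: CALICO_IPV4POOL_CIDR
-   value: "10.244.0.0/16"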
Apply the calico.yaml file:
- [root@k8scloude1 ~]# kubectl apply -f calico.yaml
- configmap/calico-config unchanged
- customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
- customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
- clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
- clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
- clusterrole.rbac.authorization.k8s.io/calico-node unchanged
- clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
- daemonset.apps/calico-node created
- serviceaccount/calico-node created
- deployment.apps/calico-kube-controllers created
- serviceaccount/calico-kube-controllers created
- Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
- poddisruptionbudget.policy/calico-kube-controllers created
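The calico pods take a minute or two to start; you can watch them come up before checking the nodes. The label selector below comes from the manifest's calico-node DaemonSet:
- #Watch until calico-node (one pod per node) and calico-kube-controllers are Running
- [root@k8scloude1 ~]# kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
- [root@k8scloude1 ~]# kubectl get pods -n kube-system -w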
All three nodes are now in the Ready state:
- [root@k8scloude1 ~]# kubectl get nodes
- NAME STATUS ROLES AGE VERSION
- k8scloude1 Ready control-plane,master 53m v1.21.0
- k8scloude2 Ready <none> 45m v1.21.0
- k8scloude3 Ready <none> 45m v1.21.0
4.8 Configuring kubectl tab completion
Check the kubectl completion command:
- [root@k8scloude1 ~]# kubectl --help | grep bash
- completion Output shell completion code for the specified shell (bash or zsh)
Add source <(kubectl completion bash) to /etc/profile and make the change take effect:
- [root@k8scloude1 ~]# cat /etc/profile | head -2
- # /etc/profile
- source <(kubectl completion bash)
- [root@k8scloude1 ~]# source /etc/profile
kubectl commands can now be completed with the Tab key:
- [root@k8scloude1 ~]# kubectl get nodes
- NAME STATUS ROLES AGE VERSION
- k8scloude1 Ready control-plane,master 59m v1.21.0
- k8scloude2 Ready <none> 51m v1.21.0
- k8scloude3 Ready <none> 51m v1.21.0
- #Note: the bash-completion-2.1-6.el7.noarch package is required, otherwise commands cannot be auto-completed
- [root@k8scloude1 ~]# rpm -qa | grep bash
- bash-completion-2.1-6.el7.noarch
- bash-4.2.46-30.el7.x86_64
- bash-doc-4.2.46-30.el7.x86_64
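Optionally, you can also alias kubectl to k and attach the same completion to the alias; this sketch uses the __start_kubectl function defined by kubectl's bash completion script, appended to /etc/profile or ~/.bashrc:
- #Short alias with the same tab completion
- alias k=kubectl
- complete -o default -F __start_kubectl k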
With that, the Kubernetes (k8s) cluster deployment is complete!
If you are interested, check out the first two tutorials in this series:
Tutorial 1: Installing and Deploying a Kubernetes (k8s) Cluster on CentOS 7 (Part 1)
Tutorial 2: Installing and Deploying a Kubernetes (k8s) Cluster on CentOS 7 (Part 2)