Installing Kubernetes v1.15.1 on CentOS 7.6 (4)

# kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Swap
..........
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.21:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e

Note: save this command, as it is needed later when adding nodes to the cluster:

kubeadm join 192.168.169.21:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e
The above is the complete output of the initialization. From it you can see the key steps required to manually initialize and install a Kubernetes cluster. The key items are:

[kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"

[certs] generates the various required certificates

[kubeconfig] generates the related kubeconfig files

[control-plane] creates the static pods for the apiserver, controller-manager and scheduler from the yaml files in the /etc/kubernetes/manifests directory

[bootstraptoken] generates the bootstrap token; record it, as it is needed later when adding nodes to the cluster with kubeadm join

The following commands configure kubectl access to the cluster for a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
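As an aside, for the root user there is an alternative to copying admin.conf: point kubectl at it directly via the KUBECONFIG environment variable (kubeadm itself suggests this in newer versions of its output).

```shell
# Alternative for the root user: point kubectl at admin.conf directly
# instead of copying it into $HOME/.kube/config.
export KUBECONFIG=/etc/kubernetes/admin.conf
```

This setting only lasts for the current shell session; add it to root's shell profile to make it persistent.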

Finally, the output gives the command for joining nodes to the cluster:

kubeadm join 192.168.169.21:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e
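If the join command was not saved, both of its pieces can be recovered on the control-plane node. The openssl pipeline below is the standard recipe for recomputing the --discovery-token-ca-cert-hash value; for illustration it is run here against a throwaway self-signed certificate so the sketch is runnable anywhere — on a real control-plane node, point CA_CRT at /etc/kubernetes/pki/ca.crt instead.

```shell
# On a real control-plane node use: CA_CRT=/etc/kubernetes/pki/ca.crt
# Here we generate a throwaway CA so the recipe can be demonstrated anywhere.
CA_CRT=/tmp/demo-ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out "$CA_CRT" 2>/dev/null

# Standard recipe for the --discovery-token-ca-cert-hash value:
# hash the DER encoding of the CA's public key with SHA-256.
hash=$(openssl x509 -pubkey -in "$CA_CRT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "sha256:$hash"
```

If the token itself has expired (the default TTL is 24 hours), running `kubeadm token create --print-join-command` on the control-plane node prints a fresh, complete join command.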

Check the cluster status and confirm that all components are in the healthy state:

# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

If the cluster initialization runs into problems, you can clean up with the following commands before retrying:

# kubeadm reset
# ifconfig cni0 down
# ip link delete cni0
# ifconfig flannel.1 down
# ip link delete flannel.1
# rm -rf /var/lib/cni/

3.3 Installing the Pod Network

Next, install the flannel network add-on:

# mkdir -p ~/k8s/
# cd ~/k8s
# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Note that the flannel image referenced in kube-flannel.yml is v0.11.0: quay.io/coreos/flannel:v0.11.0-amd64.

If a node has multiple network interfaces, then per issue 39701 you currently need to use the --iface argument in kube-flannel.yml to specify the name of the host's internal-network interface; otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:

containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1
......

Use kubectl get pod --all-namespaces -o wide to make sure all the Pods are in the Running state.

# kubectl get pod -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-dr8lf        1/1     Running   0          52m
coredns-5c98db65d4-lp8dg        1/1     Running   0          52m
etcd-node1                      1/1     Running   0          51m
kube-apiserver-node1            1/1     Running   0          51m
kube-controller-manager-node1   1/1     Running   0          51m
kube-flannel-ds-amd64-mm296     1/1     Running   0          44s
kube-proxy-kchkf                1/1     Running   0          52m
kube-scheduler-node1            1/1     Running   0          51m


3.4 Testing Whether Cluster DNS Works
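A common way to check cluster DNS is to resolve the kubernetes service name from inside a pod. A minimal sketch, assuming the busybox:1.28 image can be pulled (the pod name dns-test is arbitrary; the tag is pinned because newer busybox images ship a known-buggy nslookup):

```yaml
# dns-test.yaml -- throwaway pod for DNS checks (name is arbitrary)
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]
```

After kubectl apply -f dns-test.yaml, run kubectl exec dns-test -- nslookup kubernetes.default; with CoreDNS working, this resolves to the cluster IP of the kubernetes service (10.96.0.1 with the default service CIDR).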
