Goals

  • 1. Install Docker and kubeadm on all nodes
  • 2. Deploy the Kubernetes master
  • 3. Deploy the container network plugin
  • 4. Deploy Kubernetes workers
  • 5. Deploy the Dashboard visualization plugin
  • 6. Deploy the container storage plugin

Operating system version

$ cat /proc/version
# Linux version 5.0.1-1.el7.elrepo.x86_64 (mockbuild@Build64R7) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC)) #1 SMP Sun Mar 10 10:09:55 EDT 2019

$ rpm -q centos-release
# centos-release-7-2.1511.el7.centos.2.10.x86_64

$ cat /etc/redhat-release
# CentOS Linux release 7.2.1511 (Core)

Preparation

The following steps must be executed on every machine being set up.

Set the server hostname to k8s

Modify only the static hostname

# Change the hostname
# echo "k8s" > /etc/hostname
hostnamectl --static set-hostname k8s
# Make the kernel pick it up without a reboot
sysctl kernel.hostname=k8s
# Verify the change
$ hostname
# Output should be: k8s, which means the change succeeded

hosts中添加ip和host的映射

Remember to edit /etc/hosts (vi /etc/hosts), change the old hostname inside it to the new one, and then reboot the machine:

Once the static hostname is changed, /etc/hostname is updated automatically. However, /etc/hosts is not updated to reflect the change, so after every hostname change you must update /etc/hosts manually before rebooting CentOS 7; otherwise the system will be very slow to start.
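A minimal /etc/hosts sketch (hypothetical entries; the IPs match the master and worker addresses used later in this guide, so adjust them to your own environment):

$ cat /etc/hosts
127.0.0.1     localhost localhost.localdomain
10.58.12.180  k8s-master
10.58.12.181  k8s-slave-1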

Set up mutual trust between master and nodes

Configure SSH so that the servers trust each other.
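A minimal sketch of passwordless SSH from the master to a worker (the worker address is the one used later in this guide; assumes root login over SSH is allowed):

$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
$ ssh-copy-id root@10.58.12.181
# Verify: should log in without prompting for a password
$ ssh root@10.58.12.181 hostname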

Disable the firewall and SELinux [all machines]

  • Disable SELinux
$ sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Set the SELinux mode for the current session
setenforce 0
# 0: switch to permissive mode
# 1: switch to enforcing mode
  • Disable the firewall
$ sudo systemctl disable firewalld.service
$ sudo systemctl stop firewalld.service
  • Disable iptables
$ sudo systemctl disable iptables.service
$ sudo systemctl stop iptables.service

  • Disable swap (see the persistence note after the command)
swapoff -a
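Note that swapoff -a only disables swap until the next reboot. To make it persistent, also comment out the swap entry in /etc/fstab, for example:

$ sed -ri 's/.*swap.*/#&/' /etc/fstab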

Network environment configuration

  • Method 1:
$ sudo vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0

$ sudo sysctl -p
$ cat /proc/sys/net/bridge/bridge-nf-call-iptables
$ cat /proc/sys/net/bridge/bridge-nf-call-ip6tables

Make sure the last two outputs are both 1 (if the files are missing, see the note after Method 2).

  • Method 2:
$ vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0

$ sudo sysctl --system
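If the /proc/sys/net/bridge/bridge-nf-call-* files do not exist, the br_netfilter kernel module is probably not loaded. A sketch to load it now and on every boot, then re-apply the settings:

$ sudo modprobe br_netfilter
$ echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
$ sudo sysctl --system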

Install Docker

Unpack the offline package, enter the docker directory, and run the following command to install:

sudo yum install *

After installation, run the following commands to start the Docker service.

$ sudo systemctl start docker
$ sudo systemctl enable docker
  • Import the dependency images
    Enter the kubedep directory and run:
$ sudo docker load --input k8s-images-1.16.3.tar 

After the import, run:

$ sudo docker image ls

You should see images like the following:
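For kubeadm v1.16.3 the image set typically includes the following (exact names and tags may differ depending on how the offline bundle was built):

k8s.gcr.io/kube-apiserver            v1.16.3
k8s.gcr.io/kube-controller-manager   v1.16.3
k8s.gcr.io/kube-scheduler            v1.16.3
k8s.gcr.io/kube-proxy                v1.16.3
k8s.gcr.io/etcd                      3.3.15-0
k8s.gcr.io/coredns                   1.6.2
k8s.gcr.io/pause                     3.1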

Install kubeadm

Enter the kubeadm directory and run:

$ sudo yum install *
  • Adjust the kubelet configuration and enable the service
$ sudo systemctl enable kubelet 
$ echo KUBELET_EXTRA_ARGS="\"--fail-swap-on=false\"" > /etc/sysconfig/kubelet
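To confirm the override took effect, check the generated file. Note that kubelet will keep crash-looping at this point because /var/lib/kubelet/config.yaml does not exist yet; it is generated later by kubeadm init, so this is expected:

$ cat /etc/sysconfig/kubelet
# KUBELET_EXTRA_ARGS="--fail-swap-on=false"
$ systemctl status kubelet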

Master deployment

If you use flannel, nothing here needs to change except the version number.

$ kubeadm init --kubernetes-version=v1.16.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

If it succeeds, the output will contain a join command like the one below, which you should record; if it fails, refer to the error-resolution section later in this document.

kubeadm join 10.58.12.180:6443 --token xiyqxn.6tj26z1t0f4lzf8i  --discovery-token-ca-cert-hash sha256:8f2cf235bd3dafd1c08a5ba20625b0a7b7a6a3fe6e686b4574764fd5784d6aca

If it fails, in most cases the required images were not pulled or the local images are broken; this can be solved by specifying an image repository:

sudo kubeadm init --kubernetes-version=v1.16.3 --apiserver-advertise-address=10.58.12.180  --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap --image-repository registry.aliyuncs.com/google_containers

Add the kubectl access configuration

$ mkdir -p $HOME/.kube 
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
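For the root user you can alternatively point kubectl directly at the admin kubeconfig (this is also used in the reinstall section below):

$ export KUBECONFIG=/etc/kubernetes/admin.conf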

Check

$ kubectl get cs 

If you see output like the following, the installation is correct:

NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

Other useful commands:

kubectl get nodes
kubectl describe node <node-name>

Worker deployment

Run the join command produced during master deployment to complete the worker deployment.

$ kubeadm join 10.58.12.180:6443 --token xiyqxn.6tj26z1t0f4lzf8i  --discovery-token-ca-cert-hash sha256:8f2cf235bd3dafd1c08a5ba20625b0a7b7a6a3fe6e686b4574764fd5784d6aca
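After the join succeeds, verify on the master. A newly joined node typically shows NotReady until the network plugin (next section) is installed; illustrative output assuming this guide's hostnames:

$ kubectl get nodes
# NAME          STATUS     ROLES    AGE   VERSION
# k8s-master    Ready      master   30m   v1.16.3
# k8s-slave-1   NotReady   <none>   1m    v1.16.3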

If you forget the token or the token has expired, use the following commands to create a new one. Tokens expire after 24 hours by default:

  • If you forgot it:
$ kubeadm token list

# TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
# xiyqxn.6tj26z1t0f4lzf8i 23h 2019-12-06T15:28:20+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
  • If it has expired:
$ kubeadm token create
# xiyqxn.6tj26z1t0sdjsdwieiw

Get the SHA-256 hash of the CA certificate; this is the same value as the one printed during kubeadm init:

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# 8f2cf235bd3dafd1c08a5ba20625b0a7b7a6a3fe6e686b4574764fd5784d6aca
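Alternatively, kubeadm can regenerate the complete join command (a fresh token plus the CA cert hash) in one step:

$ kubeadm token create --print-join-command
# kubeadm join 10.58.12.180:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>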

Network module installation (required on all machines)

Before the network module is installed, running:

kubectl get nodes

shows that the node status is NotReady. Meanwhile, journalctl -xeu kubelet -f shows messages like the following:

Dec 05 16:13:03 k8s-master kubelet[16088]: E1205 16:13:03.305530   16088 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 05 16:13:07 k8s-master kubelet[16088]: W1205 16:13:07.190183 16088 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d

Run the following command to install the flannel network module. Note one caveat: flannel requires that kubeadm init was run with --pod-network-cidr=10.244.0.0/16; otherwise unexpected errors await you:

# If you do not have the kube-flannel.yml file, download it with the command below, then replace the image
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# sudo sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#jmgao1983/flannel:v0.11.0-amd64#g' kube-flannel.yml

$ kubectl apply -f kube-flannel.yml

The apply should print output like the following. Afterwards, run ifconfig and confirm that a flannel network interface exists; that completes the installation:

podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
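A quick verification sketch (pod names will differ in your cluster):

$ kubectl get pods -n kube-system | grep flannel
# kube-flannel-ds-amd64-xxxxx   1/1   Running   0   1m
$ ip addr show flannel.1   # the flannel VXLAN interface should now exist
$ kubectl get nodes        # nodes should transition to Ready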

Reinstalling the environment

Clean up the environment

# Drain and remove all nodes (names assume this guide's hostnames)
kubectl drain k8s-master --delete-local-data --force --ignore-daemonsets
kubectl drain k8s-slave-1 --delete-local-data --force --ignore-daemonsets
kubectl drain k8s-slave-2 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-master
kubectl delete node k8s-slave-1
kubectl delete node k8s-slave-2
# Revert kubeadm changes and flush iptables
kubeadm reset --ignore-preflight-errors=Swap
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Stop services and remove all state directories
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/kubelet/
rm -rf /var/lib/cni/
rm -rf /var/lib/etcd/
rm -rf /etc/kubernetes/
rm -rf /etc/cni/
rm -rf .kube/config
# Remove the leftover CNI and flannel network interfaces
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1

Reinstall

systemctl start kubelet
systemctl enable kubelet
systemctl start docker
systemctl enable docker
kubeadm init --kubernetes-version=v1.16.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
# for root
export KUBECONFIG=/etc/kubernetes/admin.conf

Pitfall to watch: --pod-network-cidr must match the network plugin's subnet exactly, otherwise all sorts of problems will follow.

Error resolution

When the installation hits errors and you need to reset, some error messages will appear.

sudo kubeadm reset

After running the above command, the output looks like this:

[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1203 10:29:38.745701 402 reset.go:96] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://10.58.12.180:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp 10.58.12.180:6443: connect: connection refused
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1203 10:29:40.341747 402 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Solution:

sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X

After the reset, initialize again with init; however, this step kept failing:

$ sudo kubeadm init --kubernetes-version=v1.16.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

After the init command runs, the shell prints Initial timeout of 40s passed. The detailed error output is as follows:

[init] Using Kubernetes version: v1.16.3
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.58.12.180]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.58.12.180 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.58.12.180 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Running systemctl status kubelet shows that kubelet failed to start because the kubelet configuration file /var/lib/kubelet/config.yaml is missing. That file is generated during the init phase, so we can set this aside for now.

Continuing with journalctl -xeu kubelet -f to inspect the detailed errors, most of the messages are node "xxx" not found:

$ journalctl -xeu kubelet -f
-- Logs begin at Mon 2019-12-02 22:02:45 CST. --
Dec 03 11:35:37 k8s-master kubelet[19995]: E1203 11:35:37.923670 19995 kuberuntime_manager.go:783] container start failed: RunContainerError: failed to start container "af7dfdc4b24593488a3d8cfff572ed4144b7b944bad8c6675e33e8c67f528439": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"kube-apiserver\": executable file not found in $PATH": unknown
Dec 03 11:35:37 k8s-master kubelet[19995]: E1203 11:35:37.923712 19995 pod_workers.go:191] Error syncing pod 8ff291bc69bce31e35b99927ce184468 ("kube-apiserver-k8s-master_kube-system(8ff291bc69bce31e35b99927ce184468)"), skipping: failed to "StartContainer" for "kube-apiserver" with RunContainerError: "failed to start container \"af7dfdc4b24593488a3d8cfff572ed4144b7b944bad8c6675e33e8c67f528439\": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused \"exec: \\\"kube-apiserver\\\": executable file not found in $PATH\": unknown"
Dec 03 11:35:37 k8s-master kubelet[19995]: E1203 11:35:37.956583 19995 kubelet.go:2267] node "k8s-master" not found
Dec 03 11:35:38 k8s-master kubelet[19995]: E1203 11:35:38.031216 19995 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://10.58.12.180:6443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 10.58.12.180:6443: connect: connection refused
Dec 03 11:35:38 k8s-master kubelet[19995]: E1203 11:35:38.056859 19995 kubelet.go:2267] node "k8s-master" not found
Dec 03 11:35:38 k8s-master kubelet[19995]: E1203 11:35:38.157079 19995 kubelet.go:2267] node "k8s-master" not found
Dec 03 11:35:38 k8s-master kubelet[19995]: E1203 11:35:38.231060 19995 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://10.58.12.180:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 10.58.12.180:6443: connect: connection refused
Dec 03 11:35:38 k8s-master kubelet[19995]: E1203 11:35:38.257322 19995 kubelet.go:2267] node "k8s-master" not found

I tried many solutions found online, none of which helped; in the end it turned out that the Dockerfile I had used to build the images was wrong.

After installing the network module, journalctl -xeu kubelet -f showed the following errors:

Dec 05 16:15:40 k8s-master kubelet[16088]: E1205 16:15:40.504199 16088 cni.go:358] Error adding kube-system_coredns-5644d7b6d9-kwtkz/770279cf6076b08149cf73b176eb8af441a6ebb90e03b2ea5912e884d5343604 to network flannel/cbr0: failed to set bridge addr: could not add IP address to "cni0": file exists
Dec 05 16:15:40 k8s-master kubelet[16088]: E1205 16:15:40.681543 16088 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to set up sandbox container "770279cf6076b08149cf73b176eb8af441a6ebb90e03b2ea5912e884d5343604" network for pod "coredns-5644d7b6d9-kwtkz": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-kwtkz_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": file exists
Dec 05 16:15:40 k8s-master kubelet[16088]: E1205 16:15:40.681632 16088 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-5644d7b6d9-kwtkz_kube-system(a78220d8-35ed-4b87-8388-11fc2bdb29c0)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "770279cf6076b08149cf73b176eb8af441a6ebb90e03b2ea5912e884d5343604" network for pod "coredns-5644d7b6d9-kwtkz": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-kwtkz_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": file exists
Dec 05 16:15:40 k8s-master kubelet[16088]: E1205 16:15:40.681651 16088 kuberuntime_manager.go:710] createPodSandbox for pod "coredns-5644d7b6d9-kwtkz_kube-system(a78220d8-35ed-4b87-8388-11fc2bdb29c0)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "770279cf6076b08149cf73b176eb8af441a6ebb90e03b2ea5912e884d5343604" network for pod "coredns-5644d7b6d9-kwtkz": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-kwtkz_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": file exists
Dec 05 16:15:40 k8s-master kubelet[16088]: E1205 16:15:40.681836 16088 pod_workers.go:191] Error syncing pod a78220d8-35ed-4b87-8388-11fc2bdb29c0 ("coredns-5644d7b6d9-kwtkz_kube-system(a78220d8-35ed-4b87-8388-11fc2bdb29c0)"), skipping: failed to "CreatePodSandbox" for "coredns-5644d7b6d9-kwtkz_kube-system(a78220d8-35ed-4b87-8388-11fc2bdb29c0)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-5644d7b6d9-kwtkz_kube-system(a78220d8-35ed-4b87-8388-11fc2bdb29c0)\" failed: rpc error: code = Unknown desc = failed to set up sandbox container \"770279cf6076b08149cf73b176eb8af441a6ebb90e03b2ea5912e884d5343604\" network for pod \"coredns-5644d7b6d9-kwtkz\": networkPlugin cni failed to set up pod \"coredns-5644d7b6d9-kwtkz_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": file exists"
Dec 05 16:15:40 k8s-master kubelet[16088]: W1205 16:15:40.894386 16088 docker_sandbox.go:394] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-5644d7b6d9-kwtkz_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "770279cf6076b08149cf73b176eb8af441a6ebb90e03b2ea5912e884d5343604"
Dec 05 16:15:40 k8s-master kubelet[16088]: W1205 16:15:40.895390 16088 pod_container_deletor.go:75] Container "770279cf6076b08149cf73b176eb8af441a6ebb90e03b2ea5912e884d5343604" not found in pod's containers
Dec 05 16:15:40 k8s-master kubelet[16088]: W1205 16:15:40.897212 16088 cni.go:328] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "770279cf6076b08149cf73b176eb8af441a6ebb90e03b2ea5912e884d5343604"
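The failed to set bridge addr: could not add IP address to "cni0": file exists error usually means a stale cni0 bridge is left over from a previous installation and its address no longer matches the flannel subnet. A fix consistent with the clean-up commands above is to delete the stale interface and let CNI recreate it:

$ sudo ifconfig cni0 down
$ sudo ip link delete cni0
# flannel/CNI will recreate cni0 with the correct address the next time a pod sandbox is created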