When building a Kubernetes cluster, pay close attention to the versions of the related components, the Linux kernel, Docker, and so on.
Docker versions used in this article: docker-ce-20.10.7, docker-ce-cli-20.10.7, containerd.io-1.4.6
Kubernetes tooling versions: kubelet-1.20.9, kubeadm-1.20.9, kubectl-1.20.9
Kubernetes component versions: kube-apiserver:v1.20.9, kube-proxy:v1.20.9, kube-controller-manager:v1.20.9, kube-scheduler:v1.20.9, coredns:1.7.0, etcd:3.4.13-0, pause:3.2

1. Environment Preparation

1.1 Machine Requirements

Each node must have at least 2 CPU cores and at least 2 GB of memory; otherwise Kubernetes will not start.

DNS: preferably use a DNS server reachable from the local network; otherwise some images cannot be downloaded.

Linux kernel: the kernel must be version 4 or later, so the kernel has to be upgraded.

Prepare three CentOS virtual machines or cloud servers.

Node hostname    IP
k8s-master01     192.168.191.130
k8s-node01       192.168.191.131
k8s-node02       192.168.191.132

If you are using virtual machines, configure a static IP on each CentOS node.

1.2 Hostname

Set the hostname of each machine as shown in the table above.

[root@base1 ~]# hostnamectl set-hostname k8s-master01 --static
[root@base2 ~]# hostnamectl set-hostname k8s-node01 --static
[root@base3 ~]# hostnamectl set-hostname k8s-node02 --static

After the change, you can verify it with the hostname command.

1.3 Network Settings

Change each virtual machine's CentOS IP to a static IP.

You can refer to this article for details: 搭建Linux集群 | Lemon-CS

The specific steps are as follows:

[root@base1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33

Edit it as follows:

BOOTPROTO="static" #dhcp改为static 
ONBOOT="yes" #开机启用本配置
IPADDR=192.168.191.130 #静态IP 192.168.191.130/192.168.191.131/192.168.191.132
GATEWAY=192.168.8.2 #默认网关
NETMASK=255.255.255.0 #子网掩码
DNS1=114.114.114.114 #DNS 配置
DNS2=8.8.8.8 #DNS 配置

Then restart the network service:

# restart the network service
service network restart

1.4 Configure the IP-to-Hostname Mapping

Configure the IP and hostname mapping on every machine:

vim /etc/hosts

192.168.191.130 k8s-master01
192.168.191.131 k8s-node01
192.168.191.132 k8s-node02

Then copy the file to the other machines:

scp /etc/hosts root@k8s-node01:/etc/hosts 
scp /etc/hosts root@k8s-node02:/etc/hosts
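Optionally, verify that the names resolve and the machines can reach each other; a quick check that can be run from any node:

# ping each host by the name configured in /etc/hosts
for h in k8s-master01 k8s-node01 k8s-node02; do ping -c 2 $h; done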

1.5 System Configuration and Initialization

Install the dependency packages:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

Switch the firewall to iptables and clear all rules:

# stop and disable firewalld
systemctl stop firewalld && systemctl disable firewalld
# install iptables-services, enable it, and flush the rules
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

Disable swap and SELinux:

# turn off the swap partition (virtual memory) and disable it permanently
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Upgrade the Linux kernel to the 4.4 long-term branch:

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
# install the long-term-support kernel
yum --enablerepo=elrepo-kernel install -y kernel-lt
# make the new kernel (here 4.4.248-1.el7.elrepo.x86_64) the default boot entry
grub2-set-default 'CentOS Linux (4.4.248-1.el7.elrepo.x86_64) 7 (Core)'

# Note: the new kernel only takes effect after a reboot.
reboot
# after the reboot, check the running kernel version
uname -r

Tune the kernel parameters required by Kubernetes.

The key settings: enable the bridge netfilter hooks, enable IP forwarding, and disable IPv6.

cat > kubernetes.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# do not use swap unless the system is out of memory
vm.swappiness=0
# do not check whether enough physical memory is available when overcommitting
vm.overcommit_memory=1
# do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf

sysctl -p /etc/sysctl.d/kubernetes.conf

Error 1: /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory. Fix it by loading the bridge netfilter module:

modprobe br_netfilter

Error 2: sysctl: cannot stat /proc/sys/net/netfilter/nf_conntrack_max: No such file or directory. Fix it by loading the conntrack module:

modprobe ip_conntrack
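These modprobe fixes do not survive a reboot. As an optional extra step (not part of the original procedure), the modules can be loaded automatically at boot through systemd's modules-load mechanism on CentOS 7:

# optional: load the required modules on every boot
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
ip_conntrack
EOF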

Adjust the system time zone:

# set the time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# restart the services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
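Since ntpdate was installed earlier, it is also worth making sure the clocks of the three machines agree, because certificate validation is time-sensitive. A minimal sketch; any reachable NTP server works, ntp.aliyun.com is only an example:

# one-shot clock synchronization against a public NTP server
ntpdate ntp.aliyun.com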

Disable services that are not needed:

systemctl stop postfix && systemctl disable postfix

1.6 Configure Log Storage

Create the directory for persistent journal logs:

mkdir /var/log/journal

Create the drop-in configuration directory:

mkdir /etc/systemd/journald.conf.d

Create the configuration file:

cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
ForwardToSyslog=no
EOF

Restart systemd-journald so the configuration takes effect:

systemctl restart systemd-journald

Raise the open-file limits (optional; can be skipped):

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf

Prerequisites for enabling IPVS mode in kube-proxy:

modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# use lsmod to verify that the modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 147456 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4 20480 0
nf_defrag_ipv4 16384 1 nf_conntrack_ipv4
nf_conntrack 114688 2 ip_vs,nf_conntrack_ipv4
libcrc32c 16384 2 xfs,ip_vs

2. Docker Installation

Install and configure Docker on every machine.

2.1 Remove old Docker packages

yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine

2.2 Configure the yum repository

yum install -y yum-utils

yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2.3 Install Docker

# this installs the latest version
yum install -y docker-ce docker-ce-cli containerd.io

# the following pinned versions are the ones used for this Kubernetes install
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6

2.4 Configure the Docker daemon

# create the /etc/docker directory
mkdir /etc/docker
# write the daemon.json file
cat > /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# Note: watch the file encoding; if Docker fails to start, journalctl -amu docker shows the error
# create the directory for systemd drop-in files for the docker service
mkdir -p /etc/systemd/system/docker.service.d

2.5 Restart the Docker service

systemctl daemon-reload && systemctl restart docker && systemctl enable docker
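After the restart, it is worth confirming that Docker picked up the systemd cgroup driver set in daemon.json, since a cgroup-driver mismatch with the kubelet is a common source of trouble:

# should print: Cgroup Driver: systemd
docker info | grep -i "cgroup driver"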

3. Install kubelet, kubeadm, and kubectl

Install the kubelet, kubeadm, and kubectl tools on every machine.

3.1 Configure the yum repository mirror

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
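Optionally confirm that the mirror works and that the 1.20.9 packages are actually available before installing (because of the exclude line in the repo, --disableexcludes=kubernetes is needed here as well):

# list the kubeadm versions offered by the mirror and filter for 1.20.9
yum list kubeadm --showduplicates --disableexcludes=kubernetes | grep 1.20.9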

3.2 Install kubeadm, kubelet, and kubectl

yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
# enable and start kubelet
systemctl enable kubelet && systemctl start kubelet
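A quick sanity check of the installed versions. Note that kubelet will keep restarting until kubeadm init or kubeadm join hands it a configuration; that is expected at this point:

kubeadm version -o short      # should print v1.20.9
kubectl version --client --short
kubelet --version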

4. Pull the Kubernetes Images

4.1 Pull the images online

Pull the required images on every machine.

Option 1

Generate the default kubeadm.conf file:

kubeadm config print init-defaults > kubeadm.conf

Edit kubeadm.conf and change the Kubernetes version to v1.20.9.
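For reference, the generated file contains a kubernetesVersion field whose default depends on the kubeadm build; that is the line to change, roughly:

# kubeadm.conf (excerpt)
kubernetesVersion: v1.20.9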

Pull the images:

kubeadm config images pull --config kubeadm.conf
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.20.9
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.20.9
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.20.9
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.20.9
[config/images] Pulled k8s.gcr.io/pause:3.2
[config/images] Pulled k8s.gcr.io/etcd:3.4.13-0
[config/images] Pulled k8s.gcr.io/coredns:1.7.0

docker images
k8s.gcr.io/kube-proxy v1.20.9 e3f6fcd87756 11 days ago 118MB
k8s.gcr.io/kube-apiserver v1.20.9 75c7f7112080 11 days ago 122MB
k8s.gcr.io/kube-controller-manager v1.20.9 2893d78e47dc 11 days ago 116MB
k8s.gcr.io/kube-scheduler v1.20.9 4aa0b4397bbb 11 days ago 46.4MB
k8s.gcr.io/etcd 3.4.13-0 0369cf4303ff 4 months ago 253MB
k8s.gcr.io/coredns 1.7.0 bfe3a36ebd25 6 months ago 45.2MB
k8s.gcr.io/pause 3.2 80d28bedfe5d 10 months ago 683kB

Option 2

Generate a download script:

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

Make the script executable and run it:

chmod +x ./images.sh && ./images.sh
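You can check that all seven images arrived; note that they are tagged with the Aliyun mirror prefix rather than k8s.gcr.io:

docker images | grep registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images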

You can run the steps above on every machine.

Alternatively, save the images on one machine and import them on the others.

Save the images:

mkdir kubeadm-basic.images

cd kubeadm-basic.images
docker save k8s.gcr.io/kube-apiserver:v1.20.9 > apiserver.tar
docker save k8s.gcr.io/coredns:1.7.0 > coredns.tar
docker save k8s.gcr.io/etcd:3.4.13-0 > etcd.tar
docker save k8s.gcr.io/kube-controller-manager:v1.20.9 > kubec-con-man.tar
docker save k8s.gcr.io/pause:3.2 > pause.tar
docker save k8s.gcr.io/kube-proxy:v1.20.9 > proxy.tar
docker save k8s.gcr.io/kube-scheduler:v1.20.9 > scheduler.tar

cd ..
tar zcvf kubeadm-basic.images.tar.gz kubeadm-basic.images

4.2 Offline images

kubeadm-basic.images.tar.gz

Upload the image archive and load the images it contains into the local Docker image store.

[root@k8s-master01 ~]# ll
total 216676
-rw-------. 1 root root 1391 Dec 22 04:42 anaconda-ks.cfg
drwxr-xr-x 2 root root 142 Dec 30 07:55 kubeadm-basic.images
-rw-r--r-- 1 root root 221857746 Dec 30 08:01 kubeadm-basic.images.tar.gz
-rw-r--r-- 1 root root 827 Dec 30 07:34 kubeadm.conf
-rw-r--r-- 1 root root 20 Dec 30 07:00 kube-images.tar.gz
-rw-r--r-- 1 root root 364 Dec 30 03:40 kubernetes.conf
[root@k8s-master01 ~]# ll kubeadm-basic.images
total 692188
-rw-r--r-- 1 root root 122923520 Dec 30 07:54 apiserver.tar
-rw-r--r-- 1 root root 45364736 Dec 30 07:54 coredns.tar
-rw-r--r-- 1 root root 254677504 Dec 30 07:54 etcd.tar
-rw-r--r-- 1 root root 117107200 Dec 30 07:54 kubec-con-man.tar
-rw-r--r-- 1 root root 691712 Dec 30 07:55 pause.tar
-rw-r--r-- 1 root root 120377856 Dec 30 07:55 proxy.tar
-rw-r--r-- 1 root root 47643136 Dec 30 07:55 scheduler.tar

Write a script that loads the image archives into the local Docker image store:

# during kubeadm init, the images would otherwise be pulled from Google's registry, which is large and slow
#1 image-load script (create image-load.sh in any directory)
#! /bin/bash
# note: adjust the path to where the images were extracted
ls /root/kubeadm-basic.images > /tmp/images-list.txt
cd /root/kubeadm-basic.images
for i in $(cat /tmp/images-list.txt)
do
docker load -i $i
done
rm -rf /tmp/images-list.txt

#2 make it executable
chmod 755 image-load.sh

#3 run it to load the images
./image-load.sh

#4 copy the script and images to the other node machines
# copy to k8s-node01
scp -r image-load.sh kubeadm-basic.images root@k8s-node01:/root/
# copy to k8s-node02
scp -r image-load.sh kubeadm-basic.images root@k8s-node02:/root/

4.3 Load the images on the worker nodes

Load the images on k8s-node01:

[root@k8s-node01 ~]# ./image-load.sh
Loaded image: k8s.gcr.io/kube-apiserver:v1.20.9
Loaded image: k8s.gcr.io/coredns:1.7.0
Loaded image: k8s.gcr.io/etcd:3.4.13-0
Loaded image: k8s.gcr.io/kube-controller-manager:v1.20.9
Loaded image: k8s.gcr.io/pause:3.2
Loaded image: k8s.gcr.io/kube-proxy:v1.20.9
Loaded image: k8s.gcr.io/kube-scheduler:v1.20.9
[root@k8s-node01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.20.9 e3f6fcd87756 11 days ago 118MB
k8s.gcr.io/kube-apiserver v1.20.9 75c7f7112080 11 days ago 122MB
k8s.gcr.io/kube-controller-manager v1.20.9 2893d78e47dc 11 days ago 116MB
k8s.gcr.io/kube-scheduler v1.20.9 4aa0b4397bbb 11 days ago 46.4MB
k8s.gcr.io/etcd 3.4.13-0 0369cf4303ff 4 months ago 253MB
k8s.gcr.io/coredns 1.7.0 bfe3a36ebd25 6 months ago 45.2MB
k8s.gcr.io/pause 3.2 80d28bedfe5d 10 months ago 683kB

Load the images on k8s-node02:

[root@k8s-node02 ~]# ./image-load.sh
Loaded image: k8s.gcr.io/kube-apiserver:v1.20.9
Loaded image: k8s.gcr.io/coredns:1.7.0
Loaded image: k8s.gcr.io/etcd:3.4.13-0
Loaded image: k8s.gcr.io/kube-controller-manager:v1.20.9
Loaded image: k8s.gcr.io/pause:3.2
Loaded image: k8s.gcr.io/kube-proxy:v1.20.9
Loaded image: k8s.gcr.io/kube-scheduler:v1.20.9
[root@k8s-node02 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.20.9 e3f6fcd87756 11 days ago 118MB
k8s.gcr.io/kube-apiserver v1.20.9 75c7f7112080 11 days ago 122MB
k8s.gcr.io/kube-controller-manager v1.20.9 2893d78e47dc 11 days ago 116MB
k8s.gcr.io/kube-scheduler v1.20.9 4aa0b4397bbb 11 days ago 46.4MB
k8s.gcr.io/etcd 3.4.13-0 0369cf4303ff 4 months ago 253MB
k8s.gcr.io/coredns 1.7.0 bfe3a36ebd25 6 months ago 45.2MB
k8s.gcr.io/pause 3.2 80d28bedfe5d 10 months ago 683kB

5. Build the Kubernetes Cluster

5.1 Initialize the master node

Run the following initialization command on the master node (i.e., the Master machine):

# initialize the master node
kubeadm init \
--apiserver-advertise-address=192.168.191.130 \
--control-plane-endpoint=k8s-master01 \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.169.0.0/16

Pay particular attention to the following parameters of this command:

  • apiserver-advertise-address
  • service-cidr
  • pod-network-cidr
  • docker0

These four network ranges must not overlap. Here pod-network-cidr is set to 192.169.0.0/16; when the network plugin is configured later, its CIDR must be changed to 192.169.0.0/16 as well, because the plugin's default of 192.168.0.0/16 overlaps with my virtual machines' IP range and would cause errors.

My initial configuration was as follows:

# master node initialization (incorrect: the Pod CIDR overlaps the host network)
kubeadm init \
--apiserver-advertise-address=192.168.191.130 \
--control-plane-endpoint=k8s-master01 \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16

Here the virtual machines' IP range overlaps with the Kubernetes Pod network range, which eventually causes errors.

After the master node is initialized successfully, you will see output like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
--discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3 \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
--discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3

Following the hints above, run these commands on the master node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# list all nodes in the cluster
kubectl get nodes

# create resources in the cluster from a config file
kubectl apply -f xxxx.yaml

# see what is deployed in the cluster (roughly the equivalent of docker ps)
# a running workload is called a container in Docker and a Pod in Kubernetes
kubectl get pods -A

At this point kubectl get nodes still shows the master node as NotReady; the network plugin must be installed first.

5.2 Install the network plugin

Download calico.yaml:

curl https://docs.projectcalico.org/manifests/calico.yaml -O

Note that the Pod network CIDR in the downloaded calico.yaml must be modified, because the master initialization command above changed the default value.

The default is 192.168.0.0/16; change it to the 192.169.0.0/16 we configured.

Edit the file:

vim calico.yaml

Inside the file, search for the value with a slash search, i.e. /192.168, find the place to change, and change 168 to 169.
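For reference, in the calico.yaml manifests of that period the Pod CIDR is set through an environment variable on the calico-node DaemonSet and is commented out by default; uncomment it and set our value (excerpt, indentation as in the file):

# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
# becomes:
- name: CALICO_IPV4POOL_CIDR
  value: "192.169.0.0/16"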

After the change, deploy the network plugin:

kubectl apply -f calico.yaml
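It takes a while for the calico and coredns pods to start; you can watch the progress and wait for the master node to turn Ready:

# watch the system pods until calico-* and coredns-* are Running (Ctrl-C to stop)
kubectl get pods -n kube-system -w
# then check the node status again
kubectl get nodes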

5.3 Join the worker nodes

That is, add the worker (slave) nodes to the cluster.

Use the join command printed earlier to add each node to the cluster:

kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
--discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3

The token is only valid for 24 hours.

If you need a new token later to add more worker nodes, generate one with:

kubeadm token create --print-join-command

For a highly available deployment, this is also the step where you would use the control-plane join command instead.

5.4 Verify the cluster status

[root@k8s-master01 config]# kubectl get node
NAME           STATUS   ROLES                  AGE     VERSION
k8s-master01   Ready    control-plane,master   3h41m   v1.20.9
k8s-node01     Ready    <none>                 3h36m   v1.20.9
k8s-node02     Ready    <none>                 3h36m   v1.20.9
[root@k8s-master01 config]# kubectl get pod -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE
calico-kube-controllers-858c9597c8-zclgq   1/1     Running   0          3h41m   192.169.32.131    k8s-master01
calico-node-7sn5n                          1/1     Running   0          3h36m   192.168.191.132   k8s-node02
calico-node-bddnl                          1/1     Running   0          3h36m   192.168.191.131   k8s-node01
calico-node-pjv6x                          1/1     Running   0          3h41m   192.168.191.130   k8s-master01
coredns-5897cd56c4-czv9k                   1/1     Running   0          3h41m   192.169.32.130    k8s-master01
coredns-5897cd56c4-g9qph                   1/1     Running   0          3h41m   192.169.32.129    k8s-master01
etcd-k8s-master01                          1/1     Running   0          3h41m   192.168.191.130   k8s-master01
kube-apiserver-k8s-master01                1/1     Running   0          3h41m   192.168.191.130   k8s-master01
kube-controller-manager-k8s-master01       1/1     Running   0          3h41m   192.168.191.130   k8s-master01
kube-proxy-flrwx                           1/1     Running   0          3h36m   192.168.191.132   k8s-node02
kube-proxy-knnxn                           1/1     Running   0          3h36m   192.168.191.131   k8s-node01
kube-proxy-lv2f7                           1/1     Running   0          3h41m   192.168.191.130   k8s-master01
kube-scheduler-k8s-master01                1/1     Running   0          3h41m   192.168.191.130   k8s-master01

6. Deploy the Dashboard

The official Kubernetes web UI:

https://github.com/kubernetes/dashboard

6.1 Download the manifest

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

Then apply it:

kubectl apply -f recommended.yaml

6.2 Modify the Dashboard Service

Modify the Service section. The Service type defaults to ClusterIP; change it to NodePort so the Dashboard can be reached from outside the cluster.

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change it to:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30012
  selector:
    k8s-app: kubernetes-dashboard

If you are on a cloud server, you also need to open the port:

kubectl get svc -A |grep kubernetes-dashboard
## find the NodePort and allow it in the security group

Access: https://<any node IP>:<NodePort>, e.g. https://192.168.191.130:31085 (use the port shown by kubectl get svc).

Note that Chrome may refuse to open the page; for details see: [kubernetes]-安装dashboard2.0并解决谷歌浏览器无法访问dashboard的问题_爷来辣的博客-CSDN博客_dashboard无法访问

I used Firefox to access it.

Logging in requires a token; to get the token, a user has to be created first.

6.3 Create a user and get a token

Create the user:

vi dash.yaml
# create an access account; prepare a yaml file
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
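The manifest then has to be applied so that the account and its token actually exist:

kubectl apply -f dash.yaml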

Get the token:

kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token

Or:

# the method suggested by the official documentation:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

eyJhbGciOiJSUzI1NiIsImtpZCI6IkxabUJQSVZmOUdSdGdCNy1maC1lcGhHeFI0Y1JQc20tOTVQTGJVTXJXZEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1qbGxrbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY3OTZkZTExLWRjMDQtNGRmZC04M2QxLTc1OTVlY2U1Nzc2ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.V6zSf7WxAGH3CqAymNCo9l9YuwXMFueIEql6VOgO47TFLRVHwU_mkgNYu1fowBK7tgv-2DJIcrNkHoKOEKGw8v9uNiD4PbIIsNQVxy1yxw-dWkPr6eDwqLK0ZcSSuu_ZvVx0s5aJS06hNrTgKNsx3FUeaiDOniwWhRg7_zjWsu-qnvEi6GRgGM1sq-t6nh52jp7QbgnQJFbJxgIEQRsFscD-kuSqsEwuUEMZNAHymm6co0SZIa1i5ibB4OXRWxnH4WB5s3pYzKgOuqSXjulF_Yk1F0g9BUzYKGqE66ru2wAzesOdV8zboqEqwyRpAV2Uqb2gtyVYfMEKPPDFJgGwBw

Enter the token to open the page.

After entering the token and clicking Sign in, the home page opens, as shown in the figure.

7. Reinitialize the Kubernetes Cluster

  1. First remove all worker nodes
# remove all nodes from the cluster
kubectl delete node --all
  2. On every worker node, delete the working directory and reset kubeadm
rm -rf /etc/kubernetes/*
kubeadm reset
  3. On the master node, delete the working directories and reset kubeadm
rm -rf /etc/kubernetes/*
rm -rf ~/.kube/*
rm -rf /var/lib/etcd/*
kubeadm reset -f
  4. Reinitialize the Kubernetes cluster using the same steps as above.

8. Uninstall Kubernetes and Docker

Remove all nodes:
kubectl delete node --all

Reset Kubernetes:
kubeadm reset -f
modprobe -r ipip

Clean up persisted data:
docker volume rm etcd
rm -r /var/etcd/backups/*

Uninstall the Kubernetes packages:
yum remove -y kubelet kubeadm kubectl

Then check docker ps -a for remaining containers, docker volume ls for leftover data volumes, and docker images for leftover images, and delete whatever is left.

Kill the running containers:

docker kill $(docker ps -a -q)

Delete all containers:

docker rm $(docker ps -a -q)

Force-delete all images:

docker rmi -f $(docker images -q)


yum remove -y docker-ce docker-ce-cli containerd.io

# delete the configuration and data directories
rm -rf /var/etcd
rm -rf /var/lib/kubelet/
rm -rf /var/lib/rancher/
rm -rf /run/kubernetes/
rm /var/lib/kubelet/* -rf
rm /etc/kubernetes/* -rf
rm /var/lib/rancher/* -rf
rm /var/lib/etcd/* -rf
rm /var/lib/cni/* -rf
