二进制安装Kubernetes(k8s)v1.36.0

介绍

https://github.com/cby-chen/Kubernetes 开源不易,帮忙点个star,谢谢了

kubernetes(k8s)二进制高可用安装部署,支持IPv4+IPv6双栈。

强烈建议在Github上查看文档 !!!

Github出问题会更新文档,并且后续尽可能第一时间更新新版本文档 !!!

手动项目地址:https://github.com/cby-chen/Kubernetes

1.环境

主机名称     IP地址          说明         软件
-            192.168.1.60    外网节点     下载各种所需安装包
Master01     192.168.1.31    master节点   kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalived、nginx
Master02     192.168.1.32    master节点   kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalived、nginx
Master03     192.168.1.33    master节点   kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalived、nginx
Node01       192.168.1.34    node节点     kubelet、kube-proxy、nfs-client、nginx
Node02       192.168.1.35    node节点     kubelet、kube-proxy、nfs-client、nginx
-            192.168.1.36    VIP          -

详细版本

软件 版本
cni_plugins_version v1.9.1
cri_containerd_cni_version 2.3.0
crictl_version v1.36.0
cri_dockerd_version 0.4.3
etcd_version v3.6.11
cfssl_version 1.6.5
kubernetes_server_version 1.36.0
docker_version 29.4.3
runc_version 1.5.0
kernel_version 6.16.4
helm_version 4.1.4
nginx_version 1.30.0

网段

IPv4

物理主机:192.168.1.0/24

service:10.96.0.0/16

pod:172.16.0.0/16

IPv6

物理主机:fc00::/8

service:fd00:1111::/112

pod:fd00:2222::/112


安装包已经整理好:https://github.com/cby-chen/Kubernetes/releases/download/v1.36.0/kubernetes-v1.36.0.tar
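
可选:下载并解压上述离线包的示例(工作目录 /root/cby 仅为本文假设,可按需调整):

# 下载并解压离线安装包
mkdir -p /root/cby && cd /root/cby
wget https://github.com/cby-chen/Kubernetes/releases/download/v1.36.0/kubernetes-v1.36.0.tar
tar xf kubernetes-v1.36.0.tar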

1.1.k8s基础系统环境配置

1.2.配置IP

详见完整版 https://github.com/cby-chen/Kubernetes

1.3.设置主机名

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

# 参数解释
#
# 参数: set-hostname
# 解释: 这是hostnamectl命令的一个参数,用于设置系统的主机名。
#
# 参数: k8s-master01
# 解释: 这是要设置的主机名,将系统的主机名设置为"k8s-master01"

1.4.配置yum源

详见完整版 https://github.com/cby-chen/Kubernetes

1.5.安装一些必备工具

# 对于 Ubuntu
apt update && apt upgrade -y && apt install -y wget psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl

# 对于 CentOS 7 & CentOS 8 & CentOS 9 & CentOS 10
yum update -y && yum -y install wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git tar curl

1.5.1 下载离线所需文件(可选)

在可访问互联网的服务器上安装一套相同版本的系统,用于下载所需的软件包

CentOS7
详见完整版 https://github.com/cby-chen/Kubernetes
CentOS8
详见完整版 https://github.com/cby-chen/Kubernetes
CentOS9 & CentOS10
详见完整版 https://github.com/cby-chen/Kubernetes
Ubuntu 下载包和依赖
详见完整版 https://github.com/cby-chen/Kubernetes

1.6.选择性下载需要工具

#!/bin/bash

# 查看版本地址:
#
# https://github.com/containernetworking/plugins/releases/
# https://github.com/containerd/containerd/releases/
# https://github.com/kubernetes-sigs/cri-tools/releases/
# https://github.com/Mirantis/cri-dockerd/releases/
# https://github.com/etcd-io/etcd/releases/
# https://github.com/cloudflare/cfssl/releases/
# https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
# https://download.docker.com/linux/static/stable/x86_64/
# https://github.com/opencontainers/runc/releases/
# https://github.com/helm/helm/tags
# http://nginx.org/download/

# Version numbers
cni_plugins_version='v1.9.1'
cri_containerd_cni_version='2.3.0'
crictl_version='v1.36.0'
cri_dockerd_version='0.4.3'
etcd_version='v3.6.11'
cfssl_version='1.6.5'
kubernetes_server_version='1.36.0'
docker_version='29.4.3'
runc_version='1.5.0'
kernel_version='5.4.278'
helm_version='4.1.4'
nginx_version='1.30.0'

# URLs
base_url='https://github.com'
kernel_url="http://mirrors.tuna.tsinghua.edu.cn/elrepo/kernel/el7/x86_64/RPMS/kernel-lt-${kernel_version}-1.el7.elrepo.x86_64.rpm"
runc_url="${base_url}/opencontainers/runc/releases/download/v${runc_version}/runc.amd64"
docker_url="https://mirrors.ustc.edu.cn/docker-ce/linux/static/stable/x86_64/docker-${docker_version}.tgz"
cni_plugins_url="${base_url}/containernetworking/plugins/releases/download/${cni_plugins_version}/cni-plugins-linux-amd64-${cni_plugins_version}.tgz"
cri_containerd_cni_url="${base_url}/containerd/containerd/releases/download/v${cri_containerd_cni_version}/containerd-${cri_containerd_cni_version}-linux-amd64.tar.gz"
crictl_url="${base_url}/kubernetes-sigs/cri-tools/releases/download/${crictl_version}/crictl-${crictl_version}-linux-amd64.tar.gz"
cri_dockerd_url="${base_url}/Mirantis/cri-dockerd/releases/download/v${cri_dockerd_version}/cri-dockerd-${cri_dockerd_version}.amd64.tgz"
etcd_url="${base_url}/etcd-io/etcd/releases/download/${etcd_version}/etcd-${etcd_version}-linux-amd64.tar.gz"
cfssl_url="${base_url}/cloudflare/cfssl/releases/download/v${cfssl_version}/cfssl_${cfssl_version}_linux_amd64"
cfssljson_url="${base_url}/cloudflare/cfssl/releases/download/v${cfssl_version}/cfssljson_${cfssl_version}_linux_amd64"
helm_url="https://mirrors.huaweicloud.com/helm/v${helm_version}/helm-v${helm_version}-linux-amd64.tar.gz"
kubernetes_server_url="https://dl.k8s.io/v${kubernetes_server_version}/kubernetes-server-linux-amd64.tar.gz"
nginx_url="http://nginx.org/download/nginx-${nginx_version}.tar.gz"

# Download packages
packages=(
# $kernel_url
$runc_url
$docker_url
$cni_plugins_url
$cri_containerd_cni_url
$crictl_url
$cri_dockerd_url
$etcd_url
$cfssl_url
$cfssljson_url
$helm_url
$kubernetes_server_url
$nginx_url
)

for package_url in "${packages[@]}"; do
filename=$(basename "$package_url")
if curl --parallel --parallel-immediate -k -L -C - -o "$filename" "$package_url"; then
echo "Downloaded $filename"
else
echo "Failed to download $filename"
exit 1
fi
done

1.7.关闭防火墙

# Ubuntu忽略,CentOS执行
systemctl disable --now firewalld

1.8.关闭SELinux

# Ubuntu忽略,CentOS执行
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.9.关闭交换分区

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0
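
可按如下方式确认交换分区已全部关闭(验证命令为通用工具,仅作示例):

# Swap一行应全部为0,且swapon无输出
free -h | grep -i swap
swapon --show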

1.10.网络配置(两种方式二选一)

# Ubuntu忽略,CentOS执行,CentOS9不支持方式一

# 方式一
# systemctl disable --now NetworkManager
# systemctl start network && systemctl enable network

# 方式二
cat > /etc/NetworkManager/conf.d/calico.conf << EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan*;interface-name:kube*
EOF
systemctl restart NetworkManager

1.11.进行时间同步

# 服务端
# apt install chrony -y
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd ; systemctl enable chronyd

# 客户端
# apt install chrony -y
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool 192.168.1.31 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd ; systemctl enable chronyd

#使用客户端进行验证
chronyc sources -v

1.12.配置ulimit

ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

1.13.配置免密登录

# apt install -y sshpass
yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.1.31 192.168.1.32 192.168.1.33 192.168.1.34 192.168.1.35"
export SSHPASS=123123
for HOST in $IP;do
sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done
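
可选:用一条循环验证免密登录是否生效(应直接返回各节点主机名,全程无需输入密码):

# 验证免密登录
for HOST in $IP;do ssh -o BatchMode=yes $HOST hostname;done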

1.14.添加启用源

# Ubuntu忽略,CentOS执行

# 为 RHEL-10或 CentOS-10配置源
dnf install https://www.elrepo.org/elrepo-release-10.el10.elrepo.noarch.rpm -y
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo

# 为 RHEL-9或 CentOS-9配置源
dnf install https://www.elrepo.org/elrepo-release-9.el9.elrepo.noarch.rpm -y
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo

# 为 RHEL-8或 CentOS-8配置源
dnf install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo

# 为 RHEL-7 SL-7 或 CentOS-7 安装 ELRepo
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo

# 查看可用安装包
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

1.15.升级内核至4.18版本以上(若内核版本不够执行升级)

# Ubuntu忽略,CentOS执行

# 安装最新的内核
# 我这里选择的是主线版kernel-ml,如需长期维护版本可安装kernel-lt
yum -y --enablerepo=elrepo-kernel install kernel-ml

# 查看已安装哪些内核
rpm -qa | grep kernel

# 查看默认内核
grubby --default-kernel

# 若默认内核不是最新的,使用以下命令设置
grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo)

# 重启生效
reboot

# v9 整合命令为:
yum install https://www.elrepo.org/elrepo-release-9.el9.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-lt -y ; grubby --default-kernel ; reboot

# v8 整合命令为:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-lt -y ; grubby --default-kernel ; reboot

# v7 整合命令为:
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-lt -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot

# 离线版本
yum install -y /root/cby/kernel-lt-*-1.el7.elrepo.x86_64.rpm ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot

1.16.安装ipvsadm

# 对于CentOS7离线安装
# yum install /root/centos7/ipset-*.el7.x86_64.rpm /root/centos7/lm_sensors-libs-*.el7.x86_64.rpm /root/centos7/ipset-libs-*.el7.x86_64.rpm /root/centos7/sysstat-*.el7_9.x86_64.rpm /root/centos7/ipvsadm-*.el7.x86_64.rpm -y

# 对于 Ubuntu
# apt install ipvsadm ipset sysstat conntrack -y

# 对于 CentOS
yum install ipvsadm ipset sysstat conntrack libseccomp -y
cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 237568 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 217088 3 nf_nat,nft_ct,ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs

1.17.修改内核参数

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF

sysctl --system
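
可选:验证关键转发参数是否已生效(net.bridge.* 相关项依赖 br_netfilter 模块,在 2.1.1 节加载该模块后才能查询):

# 验证转发已开启
sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding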

1.18.所有节点配置hosts本地解析

cat > /etc/hosts <<EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.31 k8s-master01
192.168.1.32 k8s-master02
192.168.1.33 k8s-master03
192.168.1.34 k8s-node01
192.168.1.35 k8s-node02
192.168.1.36 lb-vip

fc00::31 k8s-master01
fc00::32 k8s-master02
fc00::33 k8s-master03
fc00::34 k8s-node01
fc00::35 k8s-node02
EOF
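
可选:逐一验证本地解析(示例脚本;VIP 在高可用服务部署前无法 ping 通,故未包含):

for h in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ping -c 1 -W 1 $h >/dev/null && echo "$h 可达" || echo "$h 不可达";done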

2.k8s基本组件安装

注意 : 2.1 和 2.2 二选其一即可

说明:使用docker作为Runtime时会出现报错,无法进入容器内部,该问题是cri-dockerd不兼容导致的;同时官方已经弃用docker,建议使用Containerd作为Runtime,例如:

# docker作为Runtime

[root@k8s-master01 ~]# kubectl exec -ti busybox -- sh
Defaulted container "busybox" out of: busybox, debugger-8sq56 (ephem)
error: Internal error occurred: error sending request: Post "//[::]:39823/cri/exec/JBk5X7Cx": http: server gave HTTP response to HTTPS client
[root@k8s-master01 ~]#

# Containerd作为Runtime

[root@k8s-master01 ~]# kubectl exec -ti busybox -- sh
Defaulted container "busybox" out of: busybox, debugger-8sq56 (ephem)
/ #
/ #
[root@k8s-master01 ~]#

2.1.安装Containerd作为Runtime

# https://github.com/containernetworking/plugins/releases/
# wget https://github.com/containernetworking/plugins/releases/download/v1.7.1/cni-plugins-linux-amd64-v1.7.1.tgz

cd cby/

#创建cni插件所需目录
mkdir -p /etc/cni/net.d /opt/cni/bin
#解压cni二进制包
tar xf cni-plugins-linux-amd64-v*.tgz -C /opt/cni/bin/

# https://github.com/containerd/containerd/releases/
# wget https://github.com/containerd/containerd/releases/download/v2.0.5/containerd-2.0.5-linux-amd64.tar.gz

#解压
tar -xzf containerd-*-linux-amd64.tar.gz -C /usr/local/

#创建服务启动文件
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

2.1.1配置Containerd所需的模块

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

2.1.2加载模块

systemctl restart systemd-modules-load.service

2.1.3配置Containerd所需的内核

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# 加载内核
sysctl --system

2.1.4创建Containerd的配置文件

# 创建默认配置文件
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# 修改Containerd的配置文件

# sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
# cat /etc/containerd/config.toml | grep SystemdCgroup

# 沙箱pause镜像
sed -i "s#registry.k8s.io#registry.aliyuncs.com/chenby#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox

# 配置加速器
[root@k8s-master01 ~]# vim /etc/containerd/config.toml
[root@k8s-master01 ~]# cat /etc/containerd/config.toml | grep certs.d -C 5

[plugins.'io.containerd.cri.v1.images'.pinned_images]
sandbox = 'registry.aliyuncs.com/chenby/pause:3.10'

[plugins.'io.containerd.cri.v1.images'.registry]
config_path = '/etc/containerd/certs.d'

[plugins.'io.containerd.cri.v1.images'.image_decryption]
key_model = 'node'

[plugins.'io.containerd.cri.v1.runtime']
[root@k8s-master01 ~]#


mkdir /etc/containerd/certs.d/docker.io -pv
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://jockerhub.com"]
capabilities = ["pull", "resolve"]
EOF

# 配置 GFW 代理
mkdir -p /etc/systemd/system/containerd.service.d
touch /etc/systemd/system/containerd.service.d/http-proxy.conf
tee /etc/systemd/system/containerd.service.d/http-proxy.conf << EOF
[Service]
Environment="HTTP_PROXY=http://192.168.1.100:7890"
Environment="HTTPS_PROXY=http://192.168.1.100:7890"
Environment="NO_PROXY=localhost,127.0.0.1,containerd"
EOF

2.1.5启动并设置为开机启动

systemctl daemon-reload
systemctl enable --now containerd.service
systemctl stop containerd.service
systemctl start containerd.service
systemctl restart containerd.service
systemctl status containerd.service

2.1.6配置crictl客户端连接的运行时位置

# https://github.com/kubernetes-sigs/cri-tools/releases/
# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.36.0/crictl-v1.36.0-linux-amd64.tar.gz

#解压
tar xf crictl-v*-linux-amd64.tar.gz -C /usr/bin/
#生成配置文件
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

#测试
systemctl restart containerd
crictl info
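
可选:拉取一个测试镜像进一步验证运行时与镜像仓库配置(此处沿用上文的 pause 镜像地址,仅作示例):

crictl pull registry.aliyuncs.com/chenby/pause:3.10
crictl images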

2.2 安装docker作为Runtime

2.2.1 解压docker程序

# 二进制包下载地址:https://download.docker.com/linux/static/stable/x86_64/
# wget https://mirrors.ustc.edu.cn/docker-ce/linux/static/stable/x86_64/docker-27.4.0.tgz

#解压
tar xf docker-*.tgz
#拷贝二进制文件
cp docker/* /usr/bin/

2.2.2 创建containerd的service文件

#创建containerd的service文件,并且启动
cat >/etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

# 设置开机自启
systemctl enable --now containerd.service

2.2.3 准备docker的service文件

#准备docker的service文件
cat > /etc/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=containerd.service

[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
EOF

2.2.4 准备docker的socket文件

#准备docker的socket文件
cat > /etc/systemd/system/docker.socket <<EOF
[Unit]
Description=Docker Socket for the API

[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF

2.2.5 配置加速器

# 配置加速器
mkdir /etc/docker/ -pv
cat >/etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": [
"https://jockerhub.com"
],
"max-concurrent-downloads": 10,
"log-driver": "json-file",
"log-level": "warn",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"data-root": "/var/lib/docker",
"proxies": {
"http-proxy": "http://192.168.1.100:7890",
"https-proxy": "http://192.168.1.100:7890",
"no-proxy": "localhost"
}
}
EOF

2.2.6 启动docker

groupadd docker
systemctl daemon-reload
systemctl enable --now docker.service
systemctl enable --now docker.socket
systemctl stop docker.service
systemctl start docker.service
systemctl restart docker.service
systemctl status docker.service
docker info

2.2.7 解压cri-docker

# 由于 1.24 及更高版本移除了 dockershim,不再直接支持 docker,所以需要安装 cri-dockerd
# 下载cri-docker
# https://github.com/Mirantis/cri-dockerd/releases/
# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.16/cri-dockerd-0.3.16.amd64.tgz

# 解压cri-docker
tar xvf cri-dockerd-*.amd64.tgz
cp -r cri-dockerd/ /usr/bin/
chmod +x /usr/bin/cri-dockerd/cri-dockerd
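
可选:验证解压出的二进制能否正常执行:

/usr/bin/cri-dockerd/cri-dockerd --version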

2.2.8 写入启动cri-docker配置文件

# 写入启动配置文件
cat > /usr/lib/systemd/system/cri-docker.service <<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd/cri-dockerd --ipv6-dual-stack --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# 关键配置:允许 Socket 激活
StartLimitBurst=3

# 资源限制
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

2.2.9 写入cri-docker的socket配置文件

# 写入socket配置文件
cat > /usr/lib/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=/var/run/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF

2.2.10 启动cri-docker

systemctl daemon-reload
systemctl enable --now cri-docker.socket
systemctl restart cri-docker.socket
systemctl restart cri-docker.service
systemctl status cri-docker.socket
systemctl status cri-docker.service
systemctl stop cri-docker.socket
systemctl stop cri-docker.service

2.3.k8s与etcd下载及安装(仅在master01操作)

2.3.1解压k8s安装包

# 下载安装包
# https://github.com/etcd-io/etcd/releases/
# https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
#
# wget https://github.com/etcd-io/etcd/releases/download/v3.5.21/etcd-v3.5.21-linux-amd64.tar.gz
# wget https://cdn.dl.k8s.io/release/v1.36.0/kubernetes-server-linux-amd64.tar.gz

# 解压k8s安装文件
cd cby
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# 解压etcd安装文件
tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/

# 查看/usr/local/bin下内容
[root@localhost cby]# ll /usr/local/bin/
总计 510444
-rwxr-xr-x. 1 root root 48051624 4月15日 01:36 containerd
-rwxr-xr-x. 1 root root 8339640 4月15日 01:36 containerd-shim-runc-v2
-rwxr-xr-x. 1 root root 21758305 4月15日 01:36 containerd-stress
-rwxr-xr-x. 1 root root 24961377 4月15日 01:36 ctr
-rwxr-xr-x. 1 1000 1000 26607800 4月 2日 02:30 etcd
-rwxr-xr-x. 1 1000 1000 17469624 4月 2日 02:30 etcdctl
-rwxr-xr-x. 1 root root 88309922 4月22日 21:59 kube-apiserver
-rwxr-xr-x. 1 root root 74285218 4月22日 21:59 kube-controller-manager
-rwxr-xr-x. 1 root root 59502754 4月22日 21:59 kubectl
-rwxr-xr-x. 1 root root 60064009 4月22日 21:59 kubelet
-rwxr-xr-x. 1 root root 44200098 4月22日 21:59 kube-proxy
-rwxr-xr-x. 1 root root 49098914 4月22日 21:59 kube-scheduler
[root@localhost cby]#

2.3.2查看版本

[root@k8s-master01 ~]#  kubelet --version
Kubernetes v1.36.0
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.6.11
API version: 3.6
[root@k8s-master01 ~]#

2.3.3将组件发送至其他k8s节点

Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02'

# 拷贝master组件
for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

# 拷贝work组件
for NODE in $Work; do echo $NODE; scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

# 所有节点执行
mkdir -p /opt/cni/bin

2.4.创建证书相关文件

# 请查看Github仓库 或者进行获取已经打好的包
# 可以根据下文3.x进行手动部署操作
https://github.com/cby-chen/Kubernetes/
https://github.com/cby-chen/Kubernetes/tags
https://github.com/cby-chen/Kubernetes/releases/download/v1.36.0/kubernetes-v1.36.0.tar

3.相关证书生成

# master01节点下载证书生成工具
# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssl_1.6.5_linux_amd64" -O /usr/local/bin/cfssl
# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssljson_1.6.5_linux_amd64" -O /usr/local/bin/cfssljson

# 软件包内有
cp cfssl_*_linux_amd64 /usr/local/bin/cfssl
cp cfssljson_*_linux_amd64 /usr/local/bin/cfssljson

# 添加执行权限
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
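
可选:验证证书工具是否可用:

cfssl version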

3.1.生成etcd证书

除特别说明外,以下操作在所有master节点执行

3.1.1所有master节点创建证书存放目录

mkdir /etc/etcd/ssl -p

3.1.2master01节点生成etcd证书

# 写入生成证书所需的配置文件
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "876000h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "876000h"
}
}
}
}
EOF

cat > etcd-ca-csr.json << EOF
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "etcd",
"OU": "Etcd Security"
}
],
"ca": {
"expiry": "876000h"
}
}
EOF
# 生成etcd证书和etcd证书的key(如果你觉得以后可能会扩容,可以在ip那多写几个预留出来)
# 若没有IPv6 可删除可保留

cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

cat > etcd-csr.json << EOF
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "etcd",
"OU": "Etcd Security"
}
]
}
EOF


cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.31,192.168.1.32,192.168.1.33,::1 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
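
可选:用 openssl 检查刚签发的 etcd 证书,确认有效期与 SAN 列表符合预期(openssl 为系统自带工具,此处仅作示例):

openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -dates
openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"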

3.1.3将证书复制到其他节点

Master='k8s-master02 k8s-master03'
for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done

3.2.生成k8s相关证书

除特别说明外,以下操作在所有master节点执行

3.2.1 所有k8s节点创建证书存放目录

mkdir -p /etc/kubernetes/pki

3.2.2 master01节点生成k8s证书

# 写入生成证书所需的配置文件
cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "Kubernetes",
"OU": "Kubernetes-manual"
}
],
"ca": {
"expiry": "876000h"
}
}
EOF


cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

cat > apiserver-csr.json << EOF
{
"CN": "kube-apiserver",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "Kubernetes",
"OU": "Kubernetes-manual"
}
]
}
EOF

# 签发apiserver证书,多写了一些IP作为预留IP,为将来添加node做准备
# 10.96.0.1是service网段的第一个地址,需要计算,192.168.1.36为高可用vip地址
# 若没有IPv6 可删除可保留

cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=10.96.0.1,192.168.1.36,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,z.oiox.cn,192.168.1.31,192.168.1.32,192.168.1.33,192.168.1.34,192.168.1.35,192.168.1.36,192.168.1.37,192.168.1.38,192.168.1.39,192.168.1.40,::1 \
-profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

3.2.3 生成apiserver聚合证书

cat > front-proxy-ca-csr.json  << EOF 
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"ca": {
"expiry": "876000h"
}
}
EOF

cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

cat > front-proxy-client-csr.json << EOF
{
"CN": "front-proxy-client",
"key": {
"algo": "rsa",
"size": 2048
}
}
EOF

cfssl gencert \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

3.2.4 生成controller-manage的证书

在《5.高可用配置》选择使用哪种高可用方案
若使用 haproxy、keepalived 那么为 --server=https://192.168.1.36:9443
若使用 nginx方案,那么为 --server=https://127.0.0.1:8443

cat > manager-csr.json << EOF 
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-controller-manager",
"OU": "Kubernetes-manual"
}
]
}
EOF


cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager


# 设置一个集群项
# 在《5.高可用配置》选择使用哪种高可用方案
# 若使用 haproxy、keepalived 那么为 `--server=https://192.168.1.36:9443`
# 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443`
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# 设置一个环境项,一个上下文
kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# 设置一个用户项
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# 设置默认环境
kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

3.2.5 生成kube-scheduler的证书

cat > scheduler-csr.json << EOF 
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-scheduler",
"OU": "Kubernetes-manual"
}
]
}
EOF


cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler


# 在《5.高可用配置》选择使用哪种高可用方案
# 若使用 haproxy、keepalived 那么为 `--server=https://192.168.1.36:9443`
# 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443`

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig


kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig


kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig

3.2.6 生成admin的证书配置

cat > admin-csr.json << EOF 
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:masters",
"OU": "Kubernetes-manual"
}
]
}
EOF


cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

# 在《5.高可用配置》选择使用哪种高可用方案
# 若使用 haproxy、keepalived 那么为 `--server=https://192.168.1.36:9443`
# 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443`

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
# 该命令用于配置一个名为"kubernetes"的集群,并将其写入/etc/kubernetes/admin.kubeconfig文件中。
#
# 该命令的解释如下:
# - `kubectl config set-cluster kubernetes`: 设置一个集群并命名为"kubernetes"
# - `--certificate-authority=/etc/kubernetes/pki/ca.pem`: 指定集群使用的证书授权机构的路径。
# - `--embed-certs=true`: 该标志指示将证书嵌入到生成的kubeconfig文件中。
# - `--server=https://127.0.0.1:8443`: 指定集群的 API server 位置。
# - `--kubeconfig=/etc/kubernetes/admin.kubeconfig`: 指定要保存 kubeconfig 文件的路径和名称。

kubectl config set-credentials kubernetes-admin \
--client-certificate=/etc/kubernetes/pki/admin.pem \
--client-key=/etc/kubernetes/pki/admin-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
# 上述命令是使用`kubectl`命令来配置admin用户的上下文。
#
# `kubectl config use-context`命令用于切换`kubectl`当前使用的上下文。上下文是Kubernetes集群、用户和命名空间的组合,用于确定`kubectl`的连接目标。下面解释这个命令的不同部分:
#
# - `kubernetes-admin@kubernetes`是一个上下文名称。它指定了使用`kubernetes-admin`用户连接名为`kubernetes`的集群。
#
# - `--kubeconfig=/etc/kubernetes/admin.kubeconfig`用于指定Kubernetes配置文件的路径。Kubernetes配置文件包含连接到Kubernetes集群所需的身份验证和连接信息。
#
# 通过运行以上命令,`kubectl`将使用指定的上下文和配置文件,以便在以后的命令中以管理员身份与Kubernetes集群进行交互。
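
可选:检查生成的 admin.kubeconfig(证书内容会以省略形式展示,server 应指向所选高可用入口):

kubectl config view --kubeconfig=/etc/kubernetes/admin.kubeconfig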

3.2.7 创建kube-proxy证书

在《5.高可用配置》选择使用哪种高可用方案
若使用 haproxy、keepalived 那么为 --server=https://192.168.1.36:9443
若使用 nginx方案,那么为 --server=https://127.0.0.1:8443

cat > kube-proxy-csr.json  << EOF 
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-proxy",
"OU": "Kubernetes-manual"
}
]
}
EOF

cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy

# 在《5.高可用配置》选择使用哪种高可用方案
# 若使用 haproxy、keepalived 那么为 `--server=https://192.168.1.36:9443`
# 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443`

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig


kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
--client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kube-proxy@kubernetes \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

3.2.8 创建ServiceAccount Key ——secret

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
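
可选:校验公私钥是否匹配,从私钥导出的公钥应与 sa.pub 完全一致(以下两条命令输出的校验值应相同):

openssl rsa -in /etc/kubernetes/pki/sa.key -pubout 2>/dev/null | md5sum
md5sum < /etc/kubernetes/pki/sa.pub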

3.2.9 将证书发送到其他master节点

#其他节点创建目录
# mkdir /etc/kubernetes/pki/ -p

for NODE in k8s-master02 k8s-master03; do for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done; for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done

3.2.10 查看证书

[root@k8s-master01 cby]# ll /etc/kubernetes/pki/
总计 104
-rw-r--r--. 1 root root 1025 4月26日 19:22 admin.csr
-rw-------. 1 root root 1679 4月26日 19:22 admin-key.pem
-rw-r--r--. 1 root root 1444 4月26日 19:22 admin.pem
-rw-r--r--. 1 root root 1415 4月26日 19:22 apiserver.csr
-rw-------. 1 root root 1679 4月26日 19:22 apiserver-key.pem
-rw-r--r--. 1 root root 1805 4月26日 19:22 apiserver.pem
-rw-r--r--. 1 root root 1070 4月26日 19:22 ca.csr
-rw-------. 1 root root 1675 4月26日 19:22 ca-key.pem
-rw-r--r--. 1 root root 1363 4月26日 19:22 ca.pem
-rw-r--r--. 1 root root 1082 4月26日 19:22 controller-manager.csr
-rw-------. 1 root root 1679 4月26日 19:22 controller-manager-key.pem
-rw-r--r--. 1 root root 1501 4月26日 19:22 controller-manager.pem
-rw-r--r--. 1 root root 940 4月26日 19:22 front-proxy-ca.csr
-rw-------. 1 root root 1675 4月26日 19:22 front-proxy-ca-key.pem
-rw-r--r--. 1 root root 1094 4月26日 19:22 front-proxy-ca.pem
-rw-r--r--. 1 root root 903 4月26日 19:22 front-proxy-client.csr
-rw-------. 1 root root 1675 4月26日 19:22 front-proxy-client-key.pem
-rw-r--r--. 1 root root 1188 4月26日 19:22 front-proxy-client.pem
-rw-r--r--. 1 root root 1045 4月26日 19:22 kube-proxy.csr
-rw-------. 1 root root 1679 4月26日 19:22 kube-proxy-key.pem
-rw-r--r--. 1 root root 1464 4月26日 19:22 kube-proxy.pem
-rw-------. 1 root root 1704 4月26日 19:23 sa.key
-rw-r--r--. 1 root root 451 4月26日 19:23 sa.pub
-rw-r--r--. 1 root root 1058 4月26日 19:22 scheduler.csr
-rw-------. 1 root root 1675 4月26日 19:22 scheduler-key.pem
-rw-r--r--. 1 root root 1476 4月26日 19:22 scheduler.pem
[root@k8s-master01 cby]#

[root@k8s-master01 cby]# ll /etc/kubernetes/pki/ | wc -l
27
[root@k8s-master01 cby]#

4.k8s系统组件配置

4.1.etcd配置

详见完整版 https://github.com/cby-chen/Kubernetes

4.1.1master01配置

cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.31:2380'
listen-client-urls: 'https://192.168.1.31:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.31:2380'
advertise-client-urls: 'https://192.168.1.31:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.31:2380,k8s-master02=https://192.168.1.32:2380,k8s-master03=https://192.168.1.33:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.2master02配置

cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.32:2380'
listen-client-urls: 'https://192.168.1.32:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.32:2380'
advertise-client-urls: 'https://192.168.1.32:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.31:2380,k8s-master02=https://192.168.1.32:2380,k8s-master03=https://192.168.1.33:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.3master03配置

cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.33:2380'
listen-client-urls: 'https://192.168.1.33:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.33:2380'
advertise-client-urls: 'https://192.168.1.33:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.31:2380,k8s-master02=https://192.168.1.32:2380,k8s-master03=https://192.168.1.33:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.2.创建service(所有master节点操作)

4.2.1创建etcd.service并启动

cat > /usr/lib/systemd/system/etcd.service << EOF

[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
TimeoutSec=0
RestartSec=60
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

EOF

4.2.2创建etcd证书目录

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/

systemctl daemon-reload
systemctl enable --now etcd.service
systemctl restart etcd.service
systemctl status etcd.service

4.2.3查看etcd状态

etcdctl --endpoints="192.168.1.33:2379,192.168.1.32:2379,192.168.1.31:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table

+-------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
| ENDPOINT | ID | VERSION | STORAGE VERSION | DB SIZE | IN USE | PERCENTAGE NOT IN USE | QUOTA | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | DOWNGRADE TARGET VERSION | DOWNGRADE ENABLED |
+-------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
| 192.168.1.33:2379 | 8065f2e59c8d68c | 3.6.11 | 3.6.0 | 20 kB | 16 kB | 20% | 2.1 GB | false | false | 3 | 13 | 13 | | | false |
| 192.168.1.32:2379 | b7b7ad6bf4db3f28 | 3.6.11 | 3.6.0 | 20 kB | 16 kB | 20% | 2.1 GB | false | false | 3 | 13 | 13 | | | false |
| 192.168.1.31:2379 | bf047bcfe3b9bf27 | 3.6.11 | 3.6.0 | 20 kB | 16 kB | 20% | 2.1 GB | true | false | 3 | 13 | 13 | | | false |
+-------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
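
也可以检查各端点的健康状态,三个节点均应返回 healthy(命令与上文参数相同,仅子命令不同):

etcdctl --endpoints="192.168.1.33:2379,192.168.1.32:2379,192.168.1.31:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health --write-out=table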

5.高可用配置(在Master服务器上操作)

注意 5.1 和 5.2 二选一即可

选择使用哪种高可用方案,也可以两种都选用,实现内外兼顾的效果,比如:
5.1 的 NGINX方案实现集群内的高可用
5.2 的 haproxy、keepalived 方案实现集群外访问

在《3.2.生成k8s相关证书》

若使用 nginx方案,那么为 --server=https://127.0.0.1:8443
若使用 haproxy、keepalived 那么为 --server=https://192.168.1.36:9443

5.1 NGINX高可用方案

5.1.1 进行编译

# 安装编译环境
yum install gcc -y

# 下载解压nginx二进制文件
# wget http://nginx.org/download/nginx-1.25.3.tar.gz
tar xvf nginx-*.tar.gz
cd nginx-*

# 进行编译
./configure --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install

# 拷贝编译好的nginx
node='k8s-master02 k8s-master03 k8s-node01 k8s-node02'
for NODE in $node; do scp -r /usr/local/nginx/ $NODE:/usr/local/nginx/; done

5.1.2 写入启动配置

在所有主机上执行

# 写入nginx配置文件
cat > /usr/local/nginx/conf/kube-nginx.conf <<EOF
worker_processes 1;
events {
worker_connections 1024;
}
stream {
upstream backend {
hash \$remote_addr consistent;
server 192.168.1.31:6443 max_fails=3 fail_timeout=30s;
server 192.168.1.32:6443 max_fails=3 fail_timeout=30s;
server 192.168.1.33:6443 max_fails=3 fail_timeout=30s;
}
server {
listen 127.0.0.1:8443;
proxy_connect_timeout 1s;
proxy_pass backend;
}
}
EOF

# 写入启动配置文件
cat > /etc/systemd/system/kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx
ExecReload=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# 设置开机自启
systemctl daemon-reload
systemctl enable --now kube-nginx.service
systemctl restart kube-nginx.service
systemctl status kube-nginx.service
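
可选:确认本地 8443 端口已在监听(此时 kube-apiserver 尚未部署,后端暂时不可用属正常现象):

ss -lntp | grep 8443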

5.2 keepalived和haproxy 高可用方案

5.2.1安装keepalived和haproxy服务

systemctl disable --now firewalld
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
yum -y install keepalived haproxy

5.2.2修改haproxy配置文件(配置文件一样)

cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s

defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s


frontend monitor-in
bind *:33305
mode http
option httplog
monitor-uri /monitor

frontend k8s-master
bind 0.0.0.0:9443
bind 127.0.0.1:9443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master


backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-master01 192.168.1.31:6443 check
server k8s-master02 192.168.1.32:6443 check
server k8s-master03 192.168.1.33:6443 check
EOF
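
可选:启动前先校验配置文件语法:

haproxy -c -f /etc/haproxy/haproxy.cfg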

参数

详见完整版 https://github.com/cby-chen/Kubernetes

5.2.3Master01配置keepalived master节点

cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
# 注意网卡名
interface ens32
mcast_src_ip 192.168.1.31
virtual_router_id 51
priority 100
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.1.36
}
track_script {
chk_apiserver
} }

EOF

5.2.4Master02配置keepalived backup节点

cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1

}
vrrp_instance VI_1 {
state BACKUP
# 注意网卡名
interface ens32
mcast_src_ip 192.168.1.32
virtual_router_id 51
priority 80
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.1.36
}
track_script {
chk_apiserver
} }

EOF

5.2.5Master03配置keepalived backup节点

cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1

}
vrrp_instance VI_1 {
state BACKUP
# 注意网卡名
interface ens32
mcast_src_ip 192.168.1.33
virtual_router_id 51
priority 50
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.1.36
}
track_script {
chk_apiserver
} }

EOF

参数

详见完整版 https://github.com/cby-chen/Kubernetes

5.2.6健康检查脚本配置(lb主机)

cat >  /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash

err=0
for k in \$(seq 1 3)
do
check_code=\$(pgrep haproxy)
if [[ \$check_code == "" ]]; then
err=\$(expr \$err + 1)
sleep 1
continue
else
err=0
break
fi
done

if [[ \$err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
EOF

# 给脚本授权

chmod +x /etc/keepalived/check_apiserver.sh
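
可选:待 haproxy 启动后手动执行一次脚本验证逻辑(haproxy 正常运行时退出码应为 0;注意 haproxy 未运行时执行会停止 keepalived):

bash /etc/keepalived/check_apiserver.sh; echo $?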

5.2.7启动服务

systemctl daemon-reload
systemctl enable --now haproxy.service
systemctl enable --now keepalived.service
systemctl status haproxy.service
systemctl status keepalived.service

5.2.8测试高可用

# 能ping通
[root@k8s-node02 ~]# ping 192.168.1.36

# 能telnet访问
[root@k8s-node02 ~]# telnet 192.168.1.36 9443

# 关闭主节点,看vip是否漂移到备节点

6.k8s组件配置

所有k8s节点创建以下目录

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1.创建apiserver(所有master节点)

6.1.1master01节点配置

# 若关闭IPv6 删除 --service-cluster-ip-range 的 IPv6 即可
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--v=2 \\
--allow-privileged=true \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=192.168.1.31 \\
--service-cluster-ip-range=10.96.0.0/16,fd00:1111::/112 \\
--service-node-port-range=30000-32767 \\
--etcd-servers=https://192.168.1.31:2379,https://192.168.1.32:2379,https://192.168.1.33:2379 \\
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=aggregator \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

EOF

6.1.2master02节点配置

# 若关闭IPv6 删除 --service-cluster-ip-range 的 IPv6 即可
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--v=2 \\
--allow-privileged=true \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=192.168.1.32 \\
--service-cluster-ip-range=10.96.0.0/16,fd00:1111::/112 \\
--service-node-port-range=30000-32767 \\
--etcd-servers=https://192.168.1.31:2379,https://192.168.1.32:2379,https://192.168.1.33:2379 \\
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=aggregator \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

EOF

6.1.3 master03 node configuration

# To disable IPv6, just remove the IPv6 CIDR from --service-cluster-ip-range
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--v=2 \\
--allow-privileged=true \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=192.168.1.33 \\
--service-cluster-ip-range=10.96.0.0/16,fd00:1111::/112 \\
--service-node-port-range=30000-32767 \\
--etcd-servers=https://192.168.1.31:2379,https://192.168.1.32:2379,https://192.168.1.33:2379 \\
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=aggregator \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

EOF

Parameters

See the full version: https://github.com/cby-chen/Kubernetes

6.1.4 Start the apiserver (all master nodes)

systemctl daemon-reload
# Reloads the unit files managed by systemd. Run this after adding or modifying a unit file (.service, .socket, etc.) so systemd picks up the changes.

systemctl enable --now kube-apiserver.service
# Enables and immediately starts the kube-apiserver.service unit, the systemd service for the kube-apiserver daemon.

systemctl restart kube-apiserver.service
# Restarts the kube-apiserver.service unit, i.e. restarts the kube-apiserver daemon.

systemctl status kube-apiserver.service
# Shows the current state of the kube-apiserver.service unit: running status, whether it is enabled, and so on.
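
As an optional sanity check (not part of the original steps), each apiserver's /healthz endpoint can be queried; the default system:public-info-viewer binding allows this without authentication:

# Run on each master node; should print "ok"
curl -k https://127.0.0.1:6443/healthz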

6.2. Configure the kube-controller-manager service

# Configure on all master nodes; the configuration is identical
# 172.16.0.0/16 is the pod CIDR; change it to your own as needed

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--v=2 \\
--bind-address=0.0.0.0 \\
--root-ca-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
--leader-elect=true \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=40s \\
--node-monitor-period=5s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--allocate-node-cidrs=true \\
--service-cluster-ip-range=10.96.0.0/16,fd00:1111::/112 \\
--cluster-cidr=172.16.0.0/16,fd00:2222::/112 \\
--node-cidr-mask-size-ipv4=24 \\
--node-cidr-mask-size-ipv6=120 \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF



# To disable IPv6:
# remove the IPv6 CIDR from --service-cluster-ip-range
# remove the IPv6 CIDR from --cluster-cidr
# remove --node-cidr-mask-size-ipv6=120
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--v=2 \\
--bind-address=0.0.0.0 \\
--root-ca-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
--leader-elect=true \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=40s \\
--node-monitor-period=5s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--allocate-node-cidrs=true \\
--service-cluster-ip-range=10.96.0.0/16 \\
--cluster-cidr=172.16.0.0/16 \\
--node-cidr-mask-size-ipv4=24 \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

Parameters

See the full version: https://github.com/cby-chen/Kubernetes

6.2.1 Start kube-controller-manager and check its status

systemctl daemon-reload
systemctl enable --now kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl status kube-controller-manager.service
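
Optionally, the controller manager's secure port (10257) can be probed; /healthz is on its always-allowed paths by default, so no client certificate should be needed:

# Should print "ok"
curl -k https://127.0.0.1:10257/healthz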

6.3. Configure the kube-scheduler service

6.3.1 Configure all master nodes (identical configuration)

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--v=2 \\
--bind-address=0.0.0.0 \\
--leader-elect=true \\
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

Parameters

See the full version: https://github.com/cby-chen/Kubernetes

6.3.2 Start the service and check its status

systemctl daemon-reload
systemctl enable --now kube-scheduler.service
systemctl restart kube-scheduler.service
systemctl status kube-scheduler.service
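
The scheduler can be checked the same way on its secure port (10259):

# Should print "ok"
curl -k https://127.0.0.1:10259/healthz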

7. TLS Bootstrapping configuration

7.1 Configure on master01

# Depends on which high-availability scheme you chose in section 5
# For haproxy + keepalived, use `--server=https://192.168.1.36:9443`
# For the nginx scheme, use `--server=https://127.0.0.1:8443`

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true --server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# You can generate a token with this command, or reuse the one below
echo "$(head -c 6 /dev/urandom | md5sum | head -c 6)"."$(head -c 16 /dev/urandom | md5sum | head -c 16)"

kubectl config set-credentials tls-bootstrap-token-user \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# The token lives in bootstrap.secret.yaml; if you change it, change it in that file as well
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
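
Optionally, confirm the generated bootstrap kubeconfig points at the intended server before distributing it:

kubectl config view --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig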

7.2 Check the cluster status; if everything looks good, continue

# Seeing only one etcd in v1.28 is normal
# export ETCDCTL_API=3
# etcdctl --endpoints="192.168.1.33:2379,192.168.1.32:2379,192.168.1.31:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy ok

# Write the bootstrap token
cat > bootstrap.secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF
# Make sure you run this, do not forget!!!
kubectl create -f bootstrap.secret.yaml

8. Node configuration

8.1. Copy the certificates from master01 to the other nodes

cd /etc/kubernetes/

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done
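
A quick optional check that the files landed on every node:

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE ls /etc/kubernetes/pki /etc/kubernetes/bootstrap-kubelet.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig; done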

8.2. kubelet configuration

Note: 8.2.1 and 8.2.2 must match sections 2.1 and 2.2 above (the container runtime you installed).

8.2.1 Configure kubelet

cat > /usr/lib/systemd/system/kubelet.service << EOF

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target firewalld.service cri-docker.service docker.socket containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet-conf.yml \\
--node-labels=node.kubernetes.io/node=


[Install]
WantedBy=multi-user.target
EOF


# IPv6 example
# If you are not using IPv6, skip this
# Replace --node-ip below with each node's own IPs
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target firewalld.service cri-docker.service docker.socket containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet-conf.yml \\
--node-labels=node.kubernetes.io/node= \\
--node-ip=192.168.1.31,fc00::31
[Install]
WantedBy=multi-user.target
EOF

# Automatically detect the node IPs
# 1. Detect the local IPv4 address (adjust if your NIC is not eth0 or the first interface)
# If your NIC name differs, you can use: ip addr show | grep 'inet ' | grep -v '127.0.0.1' | awk '{print $2}' | cut -d/ -f1
NODE_IPV4=$(hostname -I | awk '{print $1}')

# 2. Derive the IPv6 address from the IPv4 (per the 192.168.1.x -> fc00::x convention used here)
# Extract the last octet
LAST_OCTET=$(echo $NODE_IPV4 | awk -F. '{print $4}')
NODE_IPV6="fc00::${LAST_OCTET}"

# 3. Generate the kubelet.service file
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target firewalld.service cri-docker.service docker.socket containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet-conf.yml \\
--node-labels=node.kubernetes.io/node= \\
--node-ip=${NODE_IPV4},${NODE_IPV6}
[Install]
WantedBy=multi-user.target
EOF

# Verify the result
cat /usr/lib/systemd/system/kubelet.service

8.2.2 Create the kubelet configuration file on all k8s nodes

When using Docker as the runtime

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
enableServer: true
staticPodPath: /etc/kubernetes/manifests
syncFrequency: 1m0s
fileCheckFrequency: 20s
httpCheckFrequency: 20s
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
rotateCertificates: true
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
  webhook:
    enabled: true
    cacheTTL: 2m0s
  anonymous:
    enabled: true
authorization:
  mode: AlwaysAllow
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
registryPullQPS: 5
registryBurst: 10
eventRecordQPS: 50
eventBurst: 100
enableDebuggingHandlers: true
healthzPort: 10248
healthzBindAddress: 127.0.0.1
oomScoreAdj: -999
clusterDomain: cluster.local
clusterDNS:
- 10.96.0.10 # must sit in the service CIDR and match the CoreDNS clusterIP (section 10)
streamingConnectionIdleTimeout: 4h0m0s
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 5m0s
nodeLeaseDurationSeconds: 40
imageMinimumGCAge: 2m0s
imageMaximumGCAge: 0s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m0s
cgroupsPerQOS: true
cgroupDriver: systemd
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
memoryManagerPolicy: None
topologyManagerPolicy: none
topologyManagerScope: container
runtimeRequestTimeout: 2m0s
hairpinMode: promiscuous-bridge
maxPods: 110
podPidsLimit: -1
resolvConf: /etc/resolv.conf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
nodeStatusMaxImages: 50
maxOpenFiles: 1000000
contentType: application/vnd.kubernetes.protobuf
kubeAPIQPS: 50
kubeAPIBurst: 100
serializeImagePulls: true
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
  imagefs.inodesFree: 5%
evictionPressureTransitionPeriod: 1m0s
mergeDefaultEvictionSettings: false
enableControllerAttachDetach: true
makeIPTablesUtilChains: true
iptablesMasqueradeBit: 14
iptablesDropBit: 15
featureGates:
  AllAlpha: false
failSwapOn: false
memorySwap: {}
containerLogMaxSize: 10Mi
containerLogMaxFiles: 5
configMapAndSecretChangeDetectionStrategy: Watch
enforceNodeAllocatable:
- pods
volumePluginDir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
logging:
  format: text
  flushFrequency: 5s
  verbosity: 3
  options:
    json:
      infoBufferSize: 0
enableSystemLogHandler: true
enableSystemLogQuery: false
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
enableProfilingHandler: true
enableDebugFlagsHandler: true
seccompDefault: false
memoryThrottlingFactor: 0.9
registerNode: true
localStorageCapacityIsolation: true
containerRuntimeEndpoint: unix:///run/cri-dockerd.sock
EOF

8.2.3 When using containerd as the runtime

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
enableServer: true
staticPodPath: /etc/kubernetes/manifests
syncFrequency: 1m0s
fileCheckFrequency: 20s
httpCheckFrequency: 20s
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
rotateCertificates: true
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
  webhook:
    enabled: true
    cacheTTL: 2m0s
  anonymous:
    enabled: true
authorization:
  mode: AlwaysAllow
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
registryPullQPS: 5
registryBurst: 10
eventRecordQPS: 50
eventBurst: 100
enableDebuggingHandlers: true
healthzPort: 10248
healthzBindAddress: 127.0.0.1
oomScoreAdj: -999
clusterDomain: cluster.local
clusterDNS:
- 10.96.0.10 # must sit in the service CIDR and match the CoreDNS clusterIP (section 10)
streamingConnectionIdleTimeout: 4h0m0s
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 5m0s
nodeLeaseDurationSeconds: 40
imageMinimumGCAge: 2m0s
imageMaximumGCAge: 0s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m0s
cgroupsPerQOS: true
cgroupDriver: systemd
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
memoryManagerPolicy: None
topologyManagerPolicy: none
topologyManagerScope: container
runtimeRequestTimeout: 2m0s
hairpinMode: promiscuous-bridge
maxPods: 110
podPidsLimit: -1
resolvConf: /etc/resolv.conf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
nodeStatusMaxImages: 50
maxOpenFiles: 1000000
contentType: application/vnd.kubernetes.protobuf
kubeAPIQPS: 50
kubeAPIBurst: 100
serializeImagePulls: true
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
  imagefs.inodesFree: 5%
evictionPressureTransitionPeriod: 1m0s
mergeDefaultEvictionSettings: false
enableControllerAttachDetach: true
makeIPTablesUtilChains: true
iptablesMasqueradeBit: 14
iptablesDropBit: 15
featureGates:
  AllAlpha: false
failSwapOn: false
memorySwap: {}
containerLogMaxSize: 10Mi
containerLogMaxFiles: 5
configMapAndSecretChangeDetectionStrategy: Watch
enforceNodeAllocatable:
- pods
volumePluginDir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
logging:
  format: text
  flushFrequency: 5s
  verbosity: 3
  options:
    json:
      infoBufferSize: 0
enableSystemLogHandler: true
enableSystemLogQuery: false
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
enableProfilingHandler: true
enableDebugFlagsHandler: true
seccompDefault: false
memoryThrottlingFactor: 0.9
registerNode: true
localStorageCapacityIsolation: true
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
EOF

8.2.4 Start kubelet

systemctl daemon-reload
systemctl enable --now kubelet.service
systemctl restart kubelet.service
systemctl status kubelet.service

8.2.5 View the cluster

[root@k8s-master01 cby]# kubectl  get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady <none> 9s v1.36.0
k8s-master02 NotReady <none> 7s v1.36.0
k8s-master03 NotReady <none> 6s v1.36.0
k8s-node01 NotReady <none> 4s v1.36.0
k8s-node02 NotReady <none> 4s v1.36.0
[root@k8s-master01 cby]#
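
NotReady is expected at this point: no CNI plugin has been installed yet (that happens in section 9). A hedged way to confirm the cause from the node conditions:

kubectl describe node k8s-node01 | grep -i NetworkReady
# Typically reports: NetworkReady=false reason:NetworkPluginNotReady until a CNI is deployed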

8.2.6 Check the container runtime

[root@k8s-master01 cby]# kubectl describe node | grep Runtime
Container Runtime Version: containerd://2.2.3
Container Runtime Version: containerd://2.2.3
Container Runtime Version: containerd://2.2.3
Container Runtime Version: containerd://2.2.3
Container Runtime Version: containerd://2.2.3
[root@k8s-master01 cby]#
[root@k8s-master01 ~]# kubectl describe node | grep Runtime
Container Runtime Version: docker://29.4.3
Container Runtime Version: docker://29.4.3
Container Runtime Version: docker://29.4.3
Container Runtime Version: docker://29.4.3
Container Runtime Version: docker://29.4.3
[root@k8s-master01 ~]#

8.3. kube-proxy configuration

8.3.1 Send the kubeconfig to the other nodes

# Run on master01
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done

8.3.2 Add the kube-proxy service file on all k8s nodes

cat >  /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy.yaml \\
--cluster-cidr=172.16.0.0/16,fd00:2222::/112 \\
--v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF


# To disable IPv6
# remove the IPv6 CIDR from --cluster-cidr
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy.yaml \\
--cluster-cidr=172.16.0.0/16 \\
--v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

# This is an example systemd service unit file for the Kubernetes kube-proxy service. Field notes:
#
# [Unit]
#
# Description: what this unit is for, here Kubernetes Kube Proxy.
# Documentation: where the documentation lives, i.e. https://github.com/kubernetes/kubernetes.
# After: start this unit after network.target.
#
# [Service]
#
# ExecStart: the command that starts kube-proxy: /usr/local/bin/kube-proxy with the config file /etc/kubernetes/kube-proxy.yaml and log level 2.
# Restart: restart the service automatically whenever it fails or exits.
# RestartSec: wait 10 seconds between restarts.
#
# [Install]
#
# WantedBy: install the unit into multi-user.target, so it starts in multi-user mode.
# Adjust the path and filename in ExecStart, and any other fields, to match your environment.

8.3.3 Add the kube-proxy configuration on all k8s nodes

cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/16,fd00:2222::/112
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

8.3.4 Start kube-proxy

systemctl daemon-reload
systemctl enable --now kube-proxy.service
systemctl restart kube-proxy.service
systemctl status kube-proxy.service
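
To confirm kube-proxy picked up ipvs mode, query its metrics endpoint (bound to 127.0.0.1:10249 in the config above):

curl 127.0.0.1:10249/proxyMode
# Expected output: ipvs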

9. Install the network plugin

Note: choose either 9.1 or 9.2, not both. Taking a snapshot before this step is recommended so you can roll back if anything goes wrong.

Note: on CentOS 7 you must upgrade libseccomp first, otherwise the network plugin cannot be installed.

# https://github.com/opencontainers/runc/releases
# Upgrade runc
# wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64

install -m 755 runc.amd64 /usr/local/sbin/runc
cp -p /usr/local/sbin/runc /usr/local/bin/runc
cp -p /usr/local/sbin/runc /usr/bin/runc

# Check the current version
[root@k8s-master-1 ~]# rpm -qa | grep libseccomp
libseccomp-2.5.2-2.el9.x86_64

# Install a libseccomp package newer than 2.4
# yum -y install http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
# Tsinghua mirror
# yum -y install https://mirrors.tuna.tsinghua.edu.cn/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm

9.1 Install Calico

9.1.1 Change the Calico CIDR

# Check available versions
https://github.com/projectcalico/calico/tags

# Install the operator
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/tigera-operator.yaml

# Download the config file
curl https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/custom-resources.yaml -O

# Modify the address pool (IPv4 only)
vim custom-resources.yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 172.16.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

# Modify the address pool (dual stack)
vim custom-resources.yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - name: ipv4-ippool
      cidr: 172.16.0.0/16
      blockSize: 26
      # encapsulation: IPIP
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    - name: ipv6-ippool
      cidr: "fd00:2222::/112"
      blockSize: 122
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    nodeAddressAutodetectionV4:
      interface: "eth.*|en.*"
    nodeAddressAutodetectionV6:
      interface: "eth.*|en.*"


# Load the vxlan kernel module (works on most systemd-based distributions)
echo "vxlan" | sudo tee /etc/modules-load.d/vxlan.conf
systemctl restart systemd-modules-load.service
lsmod | grep vxlan

# Install
kubectl create -f custom-resources.yaml

# Install the calicoctl client
curl -L https://github.com/projectcalico/calico/releases/download/v3.32.0/calicoctl-linux-amd64 -o calicoctl

# Make it executable
chmod +x ./calicoctl

# List cluster nodes
./calicoctl get nodes --allow-version-mismatch
# Check node status
./calicoctl node status --allow-version-mismatch
# View the address pools
./calicoctl get ipPool --allow-version-mismatch
./calicoctl get ipPool --allow-version-mismatch -o yaml

9.1.2 Check the pod status


# Calico initialization is slow; be patient, it takes roughly ten minutes
[root@k8s-master01 kubernetes-v1.36.0]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-system calico-apiserver-7bb46cd974-2tb62 1/1 Running 0 7m46s
calico-system calico-apiserver-7bb46cd974-92p96 1/1 Running 0 7m46s
calico-system calico-kube-controllers-7c4f878bd8-xptks 1/1 Running 0 7m45s
calico-system calico-node-6wbsv 1/1 Running 0 7m46s
calico-system calico-node-djq59 1/1 Running 0 7m46s
calico-system calico-node-dm97b 1/1 Running 0 7m46s
calico-system calico-node-lvq6w 1/1 Running 0 7m46s
calico-system calico-node-pmq6v 1/1 Running 0 7m46s
calico-system calico-typha-758c8bf6f7-8tkss 1/1 Running 0 7m43s
calico-system calico-typha-758c8bf6f7-dqsqq 1/1 Running 0 7m46s
calico-system calico-typha-758c8bf6f7-h7569 1/1 Running 0 7m43s
calico-system csi-node-driver-4rld6 2/2 Running 0 7m45s
calico-system csi-node-driver-8krh7 2/2 Running 0 7m45s
calico-system csi-node-driver-bvq9q 2/2 Running 0 7m45s
calico-system csi-node-driver-qcb9d 2/2 Running 0 7m45s
calico-system csi-node-driver-xkkcj 2/2 Running 0 7m45s
calico-system goldmane-6885dcb7d-k26sd 1/1 Running 0 7m46s
calico-system whisker-898cf7b47-75pdh 2/2 Running 0 6m53s
tigera-operator tigera-operator-85dbff4478-sntj6 1/1 Running 0 9m13s
[root@k8s-master01 kubernetes-v1.36.0]#

# IPIP mode: IPv4 only, no IPv6; a tunl interface is present
[root@k8s-master01 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.1 0.0.0.0 UG 100 0 0 ens32
172.17.125.0 192.168.1.34 255.255.255.192 UG 0 0 0 tunl0
172.18.195.0 192.168.1.33 255.255.255.192 UG 0 0 0 tunl0
172.25.92.64 192.168.1.32 255.255.255.192 UG 0 0 0 tunl0
172.25.244.192 0.0.0.0 255.255.255.192 U 0 0 0 *
172.25.244.193 0.0.0.0 255.255.255.255 UH 0 0 0 calif3e9d7544a4
172.27.14.192 192.168.1.35 255.255.255.192 UG 0 0 0 tunl0
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 ens32
[root@k8s-master01 ~]#

# VXLANCrossSubnet mode: no tunl interface
[root@k8s-master01 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.1 0.0.0.0 UG 100 0 0 ens32
172.16.32.128 0.0.0.0 255.255.255.192 U 0 0 0 *
172.16.32.129 0.0.0.0 255.255.255.255 UH 1024 0 0 calic9ff772e94d
172.16.58.192 192.168.1.35 255.255.255.192 UG 0 0 0 ens32
172.16.85.192 192.168.1.34 255.255.255.192 UG 0 0 0 ens32
172.16.122.128 192.168.1.32 255.255.255.192 UG 0 0 0 ens32
172.16.195.0 192.168.1.33 255.255.255.192 UG 0 0 0 ens32
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 ens32
[root@k8s-master01 ~]#

# Routing table
[root@k8s-master01 ~]# route -n -4 -6 | grep vxlan
fe80::/64 :: U 256 1 0 vxlan.calico
fd00:100::88bd:e5c3:433c:2080/128 :: Un 0 2 0 vxlan-v6.calico
fe80::/128 :: Un 0 3 0 vxlan.calico
fe80::6454:d1ff:fe67:d36d/128 :: Un 0 2 0 vxlan.calico
ff00::/8 :: U 256 1 0 vxlan.calico
ff00::/8 :: U 256 1 0 vxlan-v6.calico
[root@k8s-master01 ~]#

9.1.3 Uninstall

kubectl delete -f https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/tigera-operator.yaml  --force --grace-period=0
kubectl delete -f https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/custom-resources.yaml --force --grace-period=0
# Run on all hosts
modprobe -r ipip # remove the IPIP virtual interface
modprobe -r vxlan # remove the VXLAN virtual interface
# Run on all hosts
sudo rm -rf /etc/cni/net.d/*calico*
sudo rm -f /opt/cni/bin/calico*
sudo rm -f /usr/local/bin/calico*
sudo rm -rf /var/lib/cni/networks/calico/
# Run on all hosts
systemctl restart kubelet

9.2 Install Cilium

9.2.1 Install Helm

# [root@k8s-master01 ~]# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
# [root@k8s-master01 ~]# chmod 700 get_helm.sh
# [root@k8s-master01 ~]# ./get_helm.sh

# wget https://get.helm.sh/helm-v4.0.4-linux-amd64.tar.gz
tar xvf helm-*-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/

9.2.2 Install Cilium

# Add the repo
helm repo add cilium https://helm.cilium.io

# Switch to a domestic mirror
helm pull cilium/cilium
tar xvf cilium-*.tgz
cd cilium/

# sed -i "s#quay.io/#quay.m.daocloud.io/#g" values.yaml

# Install with default values from the local chart
helm install cilium ./cilium/ -n kube-system

# Or install a specific version straight from the repo (alternative to the command above)
# helm install cilium cilium/cilium --version 1.19.3 -n kube-system

# Enable IPv6
# helm install cilium cilium/cilium -n kube-system --set ipv6.enabled=true

# Enable Hubble (routing info) and the monitoring plugins
# helm install cilium ./cilium/ --namespace kube-system --set ipv6.enabled=true --set hubble.relay.enabled=true --set hubble.ui.enabled=true --set prometheus.enabled=true --set operator.prometheus.enabled=true --set hubble.enabled=true --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}"

9.2.3 Check

[root@k8s-master01 ~]# kubectl  get pod -A | grep cil
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-2tnfb 1/1 Running 0 60s
kube-system cilium-5tgcb 1/1 Running 0 60s
kube-system cilium-6shf5 1/1 Running 0 60s
kube-system cilium-ccbcx 1/1 Running 0 60s
kube-system cilium-cppft 1/1 Running 0 60s
kube-system cilium-operator-675f685d59-7q27q 1/1 Running 0 60s
kube-system cilium-operator-675f685d59-kwmqz 1/1 Running 0 60s
[root@k8s-master01 ~]#

9.2.4 Download the dedicated monitoring dashboards

Skip this if monitoring was not enabled during installation.

[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/1.19.3/examples/kubernetes/addons/prometheus/monitoring-example.yaml

[root@k8s-master01 yaml]# sed -i "s#docker.io/#jockerhub.com/#g" monitoring-example.yaml

[root@k8s-master01 yaml]# kubectl apply -f monitoring-example.yaml
namespace/cilium-monitoring created
serviceaccount/prometheus-k8s created
configmap/grafana-config created
configmap/grafana-cilium-dashboard created
configmap/grafana-cilium-operator-dashboard created
configmap/grafana-hubble-dashboard created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/grafana created
service/prometheus created
deployment.apps/grafana created
deployment.apps/prometheus created
[root@k8s-master01 yaml]#

9.2.5 Change the services to NodePort

Skip this if monitoring was not enabled during installation.

[root@k8s-master01 yaml]# kubectl  edit svc  -n kube-system hubble-ui
service/hubble-ui edited
[root@k8s-master01 yaml]#
[root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring grafana
service/grafana edited
[root@k8s-master01 yaml]#
[root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring prometheus
service/prometheus edited
[root@k8s-master01 yaml]#

type: NodePort

9.2.6 Check the ports

Skip this if monitoring was not enabled during installation.

[root@k8s-master01 yaml]# kubectl get svc -A | grep NodePort
cilium-monitoring grafana NodePort 10.111.74.3 <none> 3000:32648/TCP 74s
cilium-monitoring prometheus NodePort 10.107.240.124 <none> 9090:30495/TCP 74s
kube-system hubble-ui NodePort 10.96.185.26 <none> 80:31568/TCP 99s

9.2.7 Access

Skip this if monitoring was not enabled during installation.

http://192.168.1.31:32648
http://192.168.1.31:30495
http://192.168.1.31:31568

10. Install CoreDNS

10.1 The following steps are performed only on master01

10.1.1 Modify the file

# Download the tgz package
helm repo add coredns https://coredns.github.io/helm
helm pull coredns/coredns
tar xvf coredns-*.tgz
cd coredns/

# Change the IP address
vim values.yaml
cat values.yaml | grep clusterIP:
clusterIP: "10.96.0.10"

# Example
---
service:
  # clusterIP: ""
  # clusterIPs: []
  # loadBalancerIP: ""
  # externalIPs: []
  # externalTrafficPolicy: ""
  # ipFamilyPolicy: ""
  # The name of the Service
  # If not set, a name is generated using the fullname template
  clusterIP: "10.96.0.10"
  name: ""
  annotations: {}
---

# Switch to a domestic mirror
sed -i "s#registry.k8s.io/#k8s.m.daocloud.io/#g" values.yaml

# Install with default values
helm install coredns ./coredns/ -n kube-system

11. Install Metrics Server

11.1 The following steps are performed only on master01

11.1.1 Install metrics-server

Recent Kubernetes versions collect system resource metrics through metrics-server, which gathers memory, disk, CPU, and network usage for nodes and Pods.

# Download
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server.yaml

# Edit the configuration
vim metrics-server.yaml

---
# 1
- args:
  - --cert-dir=/tmp
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --kubelet-use-node-status-port
  - --metric-resolution=15s
  - --kubelet-insecure-tls
  - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
  - --requestheader-username-headers=X-Remote-User
  - --requestheader-group-headers=X-Remote-Group
  - --requestheader-extra-headers-prefix=X-Remote-Extra-

# 2
  volumeMounts:
  - mountPath: /tmp
    name: tmp-dir
  - name: ca-ssl
    mountPath: /etc/kubernetes/pki

# 3
  volumes:
  - emptyDir: {}
    name: tmp-dir
  - name: ca-ssl
    hostPath:
      path: /etc/kubernetes/pki
---


# Switch to a domestic mirror (optional)
sed -i "s#registry.k8s.io/#k8s.m.daocloud.io/#g" metrics-server.yaml

# Deploy
kubectl apply -f metrics-server.yaml

11.1.2 Wait a moment, then check the status

kubectl  top node
NAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%)
k8s-master01 182m 2% 2059Mi 58%
k8s-master02 104m 1% 1577Mi 44%
k8s-master03 113m 1% 1516Mi 42%
k8s-node01 53m 0% 915Mi 25%
k8s-node02 66m 0% 930Mi 26%

12. Cluster validation

12.1 Deploy a test pod

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker.m.daocloud.io/library/busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

# Check
kubectl get pod
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 17s

12.2 Use the pod to resolve the kubernetes service in the default namespace

# Check the service name
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h

# Resolve it
kubectl exec busybox -n default -- nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

12.3 Test cross-namespace resolution

# List the services
kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 76m
kube-system calico-typha ClusterIP 10.105.100.82 <none> 5473/TCP 35m
kube-system coredns-coredns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 8m14s
kube-system metrics-server ClusterIP 10.105.60.31 <none> 443/TCP 109s

# Resolve across namespaces
kubectl exec busybox -n default -- nslookup coredns.kube-system
Server: 10.96.0.10
Address 1: 10.96.0.10 coredns-coredns.kube-system.svc.cluster.local

Name: coredns-coredns.kube-system
Address 1: 10.96.0.10 coredns-coredns.kube-system.svc.cluster.local
[root@k8s-master01 metrics-server]#

12.4 Every node must be able to reach the kubernetes service on 443 and the kube-dns service on 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

12.5 Pods must be able to reach each other

kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 0 17m 172.27.14.193 k8s-node02 <none> <none>

kubectl get po -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-76754ff848-pw4xg 1/1 Running 0 38m 172.25.244.193 k8s-master01 <none> <none>
calico-node-97m55 1/1 Running 0 38m 192.168.1.34 k8s-node01 <none> <none>
calico-node-hlz7j 1/1 Running 0 38m 192.168.1.32 k8s-master02 <none> <none>
calico-node-jtlck 1/1 Running 0 38m 192.168.1.33 k8s-master03 <none> <none>
calico-node-lxfkf 1/1 Running 0 38m 192.168.1.35 k8s-node02 <none> <none>
calico-node-t667x 1/1 Running 0 38m 192.168.1.31 k8s-master01 <none> <none>
calico-typha-59d75c5dd4-gbhfp 1/1 Running 0 38m 192.168.1.35 k8s-node02 <none> <none>
coredns-coredns-c5c6d4d9b-bd829 1/1 Running 0 10m 172.25.92.65 k8s-master02 <none> <none>
metrics-server-7c8b55c754-w7q8v 1/1 Running 0 3m56s 172.17.125.3 k8s-node01 <none> <none>

# Exec into busybox and ping pods on other nodes

kubectl exec -ti busybox -- sh
/ # ping 192.168.1.34
PING 192.168.1.34 (192.168.1.34): 56 data bytes
64 bytes from 192.168.1.34: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.1.34: seq=1 ttl=63 time=0.668 ms
64 bytes from 192.168.1.34: seq=2 ttl=63 time=0.637 ms
64 bytes from 192.168.1.34: seq=3 ttl=63 time=0.624 ms
64 bytes from 192.168.1.34: seq=4 ttl=63 time=0.907 ms

# Connectivity here proves the pod can communicate across namespaces and across hosts

12.6 Create three replicas and verify they are spread across different nodes (delete when done)

cat<<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

kubectl get pod
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 6m25s
nginx-deployment-9456bbbf9-4bmvk 1/1 Running 0 8s
nginx-deployment-9456bbbf9-9rcdk 1/1 Running 0 8s
nginx-deployment-9456bbbf9-dqv8s 1/1 Running 0 8s

# Delete nginx
[root@k8s-master01 ~]# kubectl delete deployments nginx-deployment

13. Install Headlamp

helm repo add headlamp https://kubernetes-sigs.github.io/headlamp/
helm install my-headlamp headlamp/headlamp --namespace kube-system

13.1 Change the Headlamp svc to NodePort

[root@k8s-master01 ~]# kubectl edit svc  -n kube-system my-headlamp
type: NodePort

[root@k8s-master01 ~]# kubectl get svc -A | grep headlamp
kube-system my-headlamp NodePort 10.96.242.100 <none> 80:31715/TCP 20s
[root@k8s-master01 ~]#

13.2 Create a token

kubectl create token my-headlamp --namespace kube-system

eyJhbGciOiJSUzI1NiIsImtpZCI6ImU1Z2RQTWFxRTh0aTNJRFI0c0pSUXM0bkVyOEp5V2tZRXpPdGw4TnBaQnMifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzc4NDAxNjg4LCJpYXQiOjE3NzgzOTgwODgsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiZmY0Y2VmMzUtYzRlMi00NTVmLTgyYzctYWFmMTAwODliNGNkIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJteS1oZWFkbGFtcCIsInVpZCI6IjQzMjhjODY5LTQ4ODYtNDIzMS05MjFkLWNkMWNmYjhkMTNmYSJ9fSwibmJmIjoxNzc4Mzk4MDg4LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06bXktaGVhZGxhbXAifQ.jxhOecRIUL9dNrnImdZODmuSg1vpgQV4Cuup3vJe5fwmhs2QzdRd7jQ9gd-n430QWakhp9hZBt68oRaiJMxiASDsXZz8VNi_KIMoWn9AnQ_at7aoqLtZia2l0Q82PcUEOq6gB3kSH9saIEgmDwiinRv2KDpxYPZ3myAWoUXU-aUHNG2qlx0I87TkaIkqxVmvVk_AZYipuNrAZfoJDU3-A9l8e15w1IqOR84AH2svJNp9R2D5KZaRFRu3a7Z3ikFukRGx54dJdpgWlTYslRqNDtG9k5VoJPMFvnYRCI4NDuJgyanPx3aC1EqLSJ9aayb2OdcrrZM8yAdnF_z_i1993w


13.3 Log in to Headlamp

http://192.168.1.31:31715/

14. Deploy MetalLB

14.1 Download and deploy

# Download the manifest
wget https://raw.githubusercontent.com/metallb/metallb/v0.15.3/config/manifests/metallb-native.yaml

# Change the image registry
# Use your own proxy or mirror
sed -i "s#quay.io#quay.chenby.cn#g" metallb-native.yaml
cat metallb-native.yaml | grep image
image: quay.chenby.cn/metallb/controller:v0.14.5
image: quay.chenby.cn/metallb/speaker:v0.14.5

# Deploy
kubectl apply -f metallb-native.yaml

# If the international network is reachable, apply directly:
# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.3/config/manifests/metallb-native.yaml

14.2 Check that it is running

[root@k8s-master01 yaml]# kubectl -n metallb-system get all 
NAME READY STATUS RESTARTS AGE
pod/controller-6dd55858b4-g6dm2 1/1 Running 1 (11m ago) 12m
pod/speaker-2z5hf 1/1 Running 0 12m
pod/speaker-fbq8s 1/1 Running 0 12m
pod/speaker-nmlxx 1/1 Running 0 12m
pod/speaker-sbl4d 1/1 Running 0 12m
pod/speaker-xr4bd 1/1 Running 0 12m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/metallb-webhook-service ClusterIP 10.110.219.226 <none> 443/TCP 12m

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/speaker 5 5 5 5 5 kubernetes.io/os=linux 12m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 12m

NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-6dd55858b4 1 1 1 12m
[root@k8s-master01 yaml]#

14.3 Configure the VIP address pool

# Newer MetalLB versions use CRs (Custom Resources); the IPAddressPool CR defines the address pool.
# If an L2Advertisement does not select specific IPAddressPools, it is associated with all IPAddressPools by default.

cat > metallb-config-ipaddresspool.yaml << EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.71-192.168.1.75
EOF

# Bind the pool with an L2 advertisement.

cat > metallb-config-L2Advertisement.yaml << EOF
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
EOF

# Deploy
kubectl apply -f metallb-config-ipaddresspool.yaml
kubectl apply -f metallb-config-L2Advertisement.yaml
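
To verify the pool, a throwaway test service (hypothetical names, delete it afterwards) should receive an EXTERNAL-IP from first-pool:

kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
kubectl get svc lb-test    # EXTERNAL-IP should fall within 192.168.1.71-192.168.1.75
kubectl delete svc,deployment lb-test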

15. Install Envoy Gateway

15.1 Deploy

# https://gateway.envoyproxy.io/docs/install/install-helm/
# Install the Gateway API CRDs and Envoy Gateway
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.7.3 -n envoy-gateway-system --create-namespace

15.2 Create a test application

kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/v1.7.3/quickstart.yaml -n default

# Check the access address
[root@k8s-master01 ~]# kubectl get svc -n envoy-gateway-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
envoy-default-eg-e41e7b31 LoadBalancer 10.100.111.66 192.168.1.71 80:30208/TCP 3m16s
envoy-gateway ClusterIP 10.103.128.149 <none> 18000/TCP,18001/TCP,18002/TCP,19001/TCP,9443/TCP 5m19s
[root@k8s-master01 ~]#

15.3 Test access

curl --verbose --header "Host: www.example.com" http://192.168.1.71/get

16. IPv6 testing

# Deploy the application

cat<<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chenby
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chenby
  template:
    metadata:
      labels:
        app: chenby
    spec:
      hostNetwork: true
      containers:
      - name: chenby
        image: docker.io/library/nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: chenby
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  type: NodePort
  selector:
    app: chenby
  ports:
  - port: 80
    targetPort: 80
EOF


# Check the port
[root@k8s-master01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
chenby NodePort fd00:1111::4c64 <none> 80:30923/TCP 5s

[root@k8s-master01 ~]#

# Access the service's IPv6 ClusterIP directly
[root@k8s-master01 ~]# curl -I http://[fd00:1111::4c64]
HTTP/1.1 200 OK
Server: nginx/1.29.1
Date: Sun, 31 Aug 2025 04:14:38 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 13 Aug 2025 14:33:41 GMT
Connection: keep-alive
ETag: "689ca245-267"
Accept-Ranges: bytes



# Test access via the IPv4 node address
[root@k8s-master01 ~]# curl -I http://192.168.1.31:30923
HTTP/1.1 200 OK
Server: nginx/1.29.1
Date: Sun, 31 Aug 2025 04:15:05 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 13 Aug 2025 14:33:41 GMT
Connection: keep-alive
ETag: "689ca245-267"
Accept-Ranges: bytes


# Test via the host's internal IPv6 address
[root@k8s-master01 ~]# curl -I http://[fc00::31]:30923
HTTP/1.1 200 OK
Server: nginx/1.29.1
Date: Sun, 31 Aug 2025 04:15:23 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 13 Aug 2025 14:33:41 GMT
Connection: keep-alive
ETag: "689ca245-267"
Accept-Ranges: bytes


# Test via the host's public IPv6 address
[root@k8s-master01 ~]# curl -I http://[2409:8a10:6d0:a071:601f:ae96:9f4f:403d]:30923
HTTP/1.1 200 OK
Server: nginx/1.29.1
Date: Sun, 31 Aug 2025 04:16:16 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 13 Aug 2025 14:33:41 GMT
Connection: keep-alive
ETag: "689ca245-267"
Accept-Ranges: bytes


17. Install NFS dynamic storage

# Install NFS
# CentOS
yum install -y nfs-utils

# Ubuntu
apt install nfs-kernel-server nfs-common

# Create the export directory
mkdir /nfs

# Edit the exports
vim /etc/exports
/nfs *(rw,sync,no_root_squash,no_subtree_check)


# CentOS
systemctl enable rpcbind.service
systemctl enable nfs-server.service
systemctl restart rpcbind.service
systemctl restart nfs-server.service

# Ubuntu
systemctl enable nfs-kernel-server
systemctl restart nfs-kernel-server

# Reload the exports
exportfs -a
exportfs -r
exportfs


# Add the repo
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
# --set nfs.server: your NFS server address
# --set nfs.path: your NFS export path
helm install -n kube-system nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set storageClass.defaultClass=true \
--set nfs.server=192.168.1.31 \
--set nfs.path=/nfs

# Verify it is the default StorageClass
[root@k8s-master01 ~]# kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client (default) cluster.local/nfs-subdir-external-provisioner Delete Immediate true 123m
[root@k8s-master01 ~]#

17.1 Test

# Create a PVC
cat > pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
EOF
# Deploy
kubectl apply -f pvc.yaml

[root@k8s-master01 ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
nginx-pvc Bound pvc-f448b2b0-9778-4220-a4f8-97a3dc8b1d3a 200Mi RWX nfs-client <unset> 33s
[root@k8s-master01 ~]#

[root@k8s-master01 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-f448b2b0-9778-4220-a4f8-97a3dc8b1d3a 200Mi RWX Delete Bound default/nginx-pvc nfs-client <unset> 23s
[root@k8s-master01 ~]#

18. Taints

# Check the current taints
[root@k8s-master01 ~]# kubectl describe node | grep Taints
Taints: <none>
Taints: <none>
Taints: <none>
Taints: <none>
Taints: <none>

# Add a taint: blocks scheduling and evicts existing pods
kubectl taint nodes k8s-master01 key1=value1:NoExecute
kubectl taint nodes k8s-master02 key1=value1:NoExecute
kubectl taint nodes k8s-master03 key1=value1:NoExecute

# Remove the taint
kubectl taint nodes k8s-master01 key1=value1:NoExecute-
kubectl taint nodes k8s-master02 key1=value1:NoExecute-
kubectl taint nodes k8s-master03 key1=value1:NoExecute-

# Add a taint: blocks scheduling without evicting existing pods
kubectl taint nodes k8s-master01 key1=value1:NoSchedule
kubectl taint nodes k8s-master02 key1=value1:NoSchedule
kubectl taint nodes k8s-master03 key1=value1:NoSchedule

# Remove the taint
kubectl taint nodes k8s-master01 key1=value1:NoSchedule-
kubectl taint nodes k8s-master02 key1=value1:NoSchedule-
kubectl taint nodes k8s-master03 key1=value1:NoSchedule-
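
For reference, a pod that must still run on a tainted node needs a matching toleration in its spec; a minimal sketch for the key1=value1 example above:

# Add under the pod's spec:
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"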

19. Install command-line auto-completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Appendix

See the full version: https://github.com/cby-chen/Kubernetes

About

https://www.oiox.cn/

https://www.oiox.cn/index.php/start-page.html

CSDN, GitHub, Zhihu, OSChina, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, personal blog

Search for 《小陈运维》 on any of these platforms.

Articles are published mainly on the WeChat official account 《小陈运维》.