Deployment Steps

Disable the swap partition with the command below; you can also manually comment out the swap entries in /etc/fstab:

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
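
To confirm swap is really off, a quick check with free should report zero swap:

free -h   # the Swap line should read 0B across once swapoff has taken effect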

Open the required ports

On the master server

$ firewall-cmd --add-port={6443,2379-2380,10250,10251,10252,5473,179}/tcp --permanent
$ firewall-cmd --add-port={4789,8285,8472}/udp --permanent
$ firewall-cmd --reload
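
To verify the rules were persisted, you can list the open ports:

$ firewall-cmd --list-ports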

On the worker servers

$ firewall-cmd --add-port={10250,30000-32767,5473,179}/tcp --permanent
$ firewall-cmd --add-port={4789,8285,8472}/udp --permanent
$ firewall-cmd --reload

Install kubeadm, kubectl, kubelet, and related tools

On Ubuntu or Debian

If you follow the official steps directly, the GPG key may fail to download and apt update may fail to sync because proxy DNS resolution can be unstable, so use the Aliyun mirror instead:

sudo mkdir -p /etc/apt/keyrings
sudo curl -fsSLo /etc/apt/keyrings/apt-key.gpg https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg
echo "deb [signed-by=/etc/apt/keyrings/apt-key.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet=1.26.3-00 kubeadm=1.26.3-00 kubectl=1.26.3-00
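
Optionally (not part of the original steps), hold the packages so a routine apt upgrade does not move them off the pinned 1.26.3 version:

sudo apt-mark hold kubelet kubeadm kubectl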

On openEuler or CentOS

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
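
A quick sanity check that the tools installed correctly:

kubeadm version -o short
kubectl version --client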

Configure kernel modules

Kernel module loading configuration:

# Enable kernel modules
sudo modprobe overlay && sudo modprobe br_netfilter
# Add some settings to sysctl
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload sysctl
sudo sysctl --system
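
To confirm the settings took effect:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward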

On Ubuntu or Debian

ipvs (optional; enables IPVS mode for kube-proxy)

You can load the modules directly with modprobe, or write them into a configuration file so they load at boot. Note that on kernel 4.19 and later, nf_conntrack_ipv4 has been merged into nf_conntrack, so use that module name instead if the command below fails.

sudo modprobe ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4

If you use a configuration file, on Ubuntu it is /etc/modules, which takes one module name per line:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
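
Either way, you can confirm the modules are loaded with:

lsmod | grep -e ip_vs -e nf_conntrack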

On openEuler or CentOS

ipvs (optional; enables IPVS mode for kube-proxy)

You can load the modules directly with modprobe, or write them into a configuration file so they load at boot.

sudo modprobe ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4

On openEuler the configuration file is /etc/sysconfig/modules/ipvs.modules:

$ sudo tee /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
$ sudo chmod 755 /etc/sysconfig/modules/ipvs.modules && sudo bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Install a container runtime (Container Runtime)

Install Docker

On Ubuntu or Debian

Install dependencies

sudo apt update
sudo apt-get install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg-agent \
  software-properties-common

Configure the Aliyun mirror

curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

Install

# Update and install docker-ce
sudo apt-get -y update
sudo apt install -y docker-ce
# Install docker-compose
sudo apt install -y docker-compose
# Enable Docker at boot
sudo systemctl enable docker
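
Note that enable alone does not start the service; a quick way to start it and confirm the install worked:

sudo systemctl start docker
sudo docker version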

On openEuler or CentOS

Install dependencies

sudo yum install -y device-mapper-persistent-data

Configure the Aliyun mirror

sudo yum-config-manager \
  --add-repo \
  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Remember to manually edit the $releasever field in /etc/yum.repos.d/docker-ce.repo and change it to 8, since openEuler's release version does not match the CentOS directory layout of the docker-ce repo.

Install

sudo yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin
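
Unlike the Ubuntu steps above, this does not enable or start Docker, so you will likely also want:

sudo systemctl enable --now docker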

Adjust the cgroup driver

After installation Docker defaults to the cgroupfs cgroup driver, which needs to be changed to systemd. Edit the Docker configuration file by running sudo vi /etc/docker/daemon.json; registry mirror addresses can also be configured in this same file (see the sketch after the next snippet).

$ sudo mkdir -p /etc/docker  # create the directory first if it does not exist, then add the daemon.json file
$ sudo vi /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
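
A sketch of a combined daemon.json that also sets a registry mirror; the mirror URL below is a placeholder, substitute one you actually have access to:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://your-mirror.example.com"]
}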

Restart Docker by running:

sudo systemctl daemon-reload && sudo systemctl restart docker

Check the current cgroup driver by running:

sudo docker info | grep -i cgroup

If the cgroup driver is not adjusted here, kubelet will fail to start later on.

Initialize the master node

You can specify the init parameters directly on the command line:

sudo kubeadm init --pod-network-cidr 172.16.0.0/16 \
--apiserver-advertise-address=192.168.56.130 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers

You can also set everything through a configuration file.

On the master node, prepare the kubeadm initialization file; the default configuration can be exported with:

$ kubeadm config print init-defaults > kubeadm.yaml

Edit the configuration file (adjust the following parameters to match your own machine):

# 1. Replace with the master node's IP
advertiseAddress: 192.168.197.139
# 2. Use containerd as the CRI
criSocket: unix:///var/run/containerd/containerd.sock
# 3. Aliyun registry for the Kubernetes system images (not for application container images); if your proxy is fast enough, no need to change this
imageRepository: registry.aliyuncs.com/google_containers
# 4. Switch cgroupDriver to systemd
cgroupDriver: systemd
# 5. Pin the version
kubernetesVersion: 1.26.3
# 6. Configure the CIDR ranges
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16 # Pod subnet; the flannel plugin expects this range
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  # replace with the master node's hostname
  name: ubuntu-master
  taints: null

Then run the initialization command:

sudo kubeadm init --config kubeadm.yaml

If a strange error appears, such as "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService, try regenerating containerd's configuration and restarting the service (the stock config.toml shipped with some packages disables the CRI plugin):

sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd

Initialization finally succeeds.

A few more steps are needed, otherwise kubectl commands will fail with a connection refused error:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
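
kubectl should now be able to reach the API server; a quick check (the master will stay NotReady until a CNI plugin is installed below):

kubectl get nodes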

Add nodes

Following the printed hint, run the following command on each separate node:

kubeadm join 192.168.15.234:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:55568f7f72f6b875543ea24ebf0975e6aab91e898577ad5ad5a2cb476d63025e
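
If the token has expired or the join command was lost, a fresh one can be generated on the master:

kubeadm token create --print-join-command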

Configure the CNI network plugin

calico

Find a way to download a calico.yaml manifest, then apply it:


kubectl apply -f calico.yaml

Checking the nodes again will show that all nodes are in the READY state:

kubectl get nodes

flannel

Download the manifest file from github.com/flannel-io/…

Then:

kubectl apply -f kube-flannel.yml
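
To confirm the flannel pods came up (recent manifests deploy into the kube-flannel namespace; older ones use kube-system):

kubectl get pods -n kube-flannel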

Deploy an nginx sample as a test

vim nginx-deployment.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-service
  name: nginx-service
spec:
  ports: # externally exposed port configuration
  - nodePort: 30013
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-pod
  type: NodePort   # the NodePort type exposes the port outside the cluster
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - image: nginx:latest # image name
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
Create the deployment:

kubectl apply -f nginx-deployment.yaml

The sample deployment failed; checking the events log:
Events:
  Type     Reason                  Age                   From               Message
  ----     ------                  ----                  ----               -------
  Normal   Scheduled               56m                   default-scheduler  Successfully assigned default/nginx-deploy-54844bd945-t9vn4 to huangji-ubuntu2004-k8s-subnode2
  Warning  FailedCreatePodSandBox  56m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4112f80b3f2de8b5417ccc76f179b52e52b78004a159d23aae10daa04b3079d8": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Normal   SandboxChanged          100s (x256 over 56m)  kubelet            Pod sandbox changed, it will be killed and re-created.

Checking the pod status at this point shows it stuck in that state.

To attempt a fix, kubeadm reset must be run on every node, and then the following files deleted on all of them:

sudo rm -rf /var/lib/calico/ && sudo rm -rf /etc/cni/net.d/10-calico.conflist && sudo rm -rf /etc/cni/net.d/calico-kubeconfig

After redeploying with flannel instead, the problem was fully resolved.

Common Issues

System component images fail to pull

This is usually because k8s.io cannot be reached; manually pull using the Aliyun registry instead:

sudo kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
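
To preview which images a given kubeadm version needs before pulling:

kubeadm config images list --image-repository registry.aliyuncs.com/google_containers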

You can also configure the default image registry in kubeadm-config by editing its image path fields:

kubectl edit cm -n kube-system kubeadm-config

The same can also be recorded in the init.yaml configuration file.

If the config change still does not take effect, edit the Deployment and change its image path fields:

kubectl edit deploy coredns -n kube-system

describe pod reports: Readiness probe failed: HTTP probe failed with statuscode: 503

Turn off the firewall by running this on all nodes:

sudo systemctl stop firewalld
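
To keep the firewall from coming back after a reboot, it can also be disabled permanently (note this is the blunt alternative to the port-opening approach at the top; pick one):

sudo systemctl disable firewalld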