Kubernetes 1.24: From Setup to "Lying Flat" (Part 1)

Three machines:

1 master node

2 worker nodes

OS: Ubuntu Server 22.04

Part 1: Preparing the Environment

1) Install apt-transport-https on all nodes

root@srv1:~# apt install apt-transport-https ca-certificates curl gnupg lsb-release -y

2) Enable bridge-nf-call-ip6tables on all nodes (lets the bridge's Netfilter reuse the IP layer's Netfilter code)

root@srv1:~# echo "br_netfilter" > /etc/modules-load.d/k8s.conf
root@srv1:~# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
root@srv1:~# sysctl --system
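Note that the modules-load.d entry only takes effect at the next boot; if br_netfilter is not already loaded, the two bridge sysctls above will fail to apply in the current session. A quick sanity check (my addition, using only standard tooling):

root@srv1:~# modprobe br_netfilter     # load the module immediately for this session
root@srv1:~# lsmod | grep br_netfilter     # confirm it is loaded
root@srv1:~# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward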

3) Disable swap and comment out its entries in fstab

root@srv1:~# vim /etc/fstab
......
#/dev/disk/by-uuid/aa1f65c9-2728-4763-9f2b-d6d0bc1ee92e none swap sw 0 0
......
#/swap.img none swap sw 0 0
root@srv1:~# swapoff -a
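To confirm swap is really off (a verification step I am adding): swapon --show should print nothing, and the Swap line from free should read all zeros.

root@srv1:~# swapon --show     # no output means no active swap
root@srv1:~# free -h | grep -i swap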

4) Install kubeadm, kubelet, and kubectl on all nodes

root@srv1:~# curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/apt-key.gpg --import
root@srv1:~# chmod 644 /etc/apt/trusted.gpg.d/apt-key.gpg
root@srv1:~# echo "deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
root@srv1:~# apt update ; apt install kubeadm kubelet kubectl -y
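An optional step not in the original walkthrough, but standard practice from the upstream kubeadm install docs: hold the three packages so a routine apt upgrade cannot move the cluster to a new Kubernetes version behind your back.

root@srv1:~# apt-mark hold kubeadm kubelet kubectl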

5) Install containerd.io on all nodes

root@srv1:~# curl -s https://download.docker.com/linux/ubuntu/gpg | gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/apt-key.gpg --import
root@srv1:~# echo "deb https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" >> /etc/apt/sources.list.d/docker.list
root@srv1:~# apt update ; apt install containerd.io -y
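A quick check (my addition) that the runtime installed cleanly and its service is running:

root@srv1:~# containerd --version
root@srv1:~# systemctl is-active containerd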

6) Configure containerd.io on all nodes

root@srv1:~# containerd config default > /etc/containerd/config.toml
root@srv1:~# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
root@srv1:~# sed -i 's#endpoint = ""#endpoint = "https://3laho3y3.mirror.aliyuncs.com"#g' /etc/containerd/config.toml
root@srv1:~# sed -i 's#sandbox_image = "k8s.gcr.io/pause#sandbox_image = "registry.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
root@srv1:~# systemctl daemon-reload && systemctl restart containerd.service && reboot
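Before rebooting, it is worth confirming the sed edits landed where intended (a verification step I am adding; a blind global replace like the endpoint one can match more lines than expected, since the default config.toml may contain more than one empty endpoint field). SystemdCgroup = true should sit under the runc options table, and sandbox_image should now point at the Aliyun mirror.

root@srv1:~# grep -n 'SystemdCgroup' /etc/containerd/config.toml
root@srv1:~# grep -n 'sandbox_image' /etc/containerd/config.toml
root@srv1:~# grep -n 'endpoint = ' /etc/containerd/config.toml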

Part 2: Configuring the Master Node

1) Initialize Kubernetes, specifying the API server advertise address and the pod network CIDR

root@srv1:~# kubeadm init --apiserver-advertise-address=192.168.1.11 --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.11:6443 --token vkemxu.1zkcqx00umaech8i \
        --discovery-token-ca-cert-hash sha256:8c83889acbef5a54b410e8d2513b6eca01ee7eef1244737bacec81168fc5d553
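The same initialization can be captured in a kubeadm configuration file, which is easier to keep in version control and rerun. The sketch below is my equivalent of the flags above, using the kubeadm.k8s.io/v1beta3 API that ships with 1.24; kubeadm-init.yaml is a file name I chose. Note that --config cannot be combined with the individual flags, so pick one style or the other.

root@srv1:~# cat > kubeadm-init.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.11
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
EOF
root@srv1:~# kubeadm init --config kubeadm-init.yaml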

2) Set up the kubeadm environment as prompted

root@srv1:~# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> .bashrc
root@srv1:~# export KUBECONFIG=/etc/kubernetes/admin.conf
root@srv1:~# crictl images
IMAGE                                                              TAG       IMAGE ID        SIZE
registry.aliyuncs.com/google_containers/coredns                    v1.8.6    a4ca41631cc7a   13.6MB
registry.aliyuncs.com/google_containers/etcd                       3.5.3-0   aebe758cef4cd   102MB
registry.aliyuncs.com/google_containers/kube-apiserver             v1.24.2   d3377ffb7177c   33.8MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.24.2   34cdf99b1bb3b   31MB
registry.aliyuncs.com/google_containers/kube-proxy                 v1.24.2   a634548d10b03   39.5MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.24.2   5d725196c1f47   15.5MB
registry.aliyuncs.com/google_containers/pause                      3.7       221177c6082a8   311kB
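If crictl warns that no runtime endpoint is configured (in 1.24 there is no dockershim left to fall back on), pin it to the containerd socket. A small sketch of my own; unix:///run/containerd/containerd.sock is the stock containerd.io socket path.

root@srv1:~# cat > /etc/crictl.yaml <<'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF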

3) Deploy the Flannel pod network

# Due to network issues, pull the flannel image first. See the kube-flannel.yml file for the exact version required.
root@srv1:~# crictl pull rancher/mirrored-flannelcni-flannel:v0.18.1
root@srv1:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
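To watch the rollout finish instead of polling by hand (my addition; the app=flannel label comes from the same kube-flannel.yml manifest):

root@srv1:~# kubectl -n kube-system rollout status ds/kube-flannel-ds
root@srv1:~# kubectl -n kube-system get pods -l app=flannel -o wide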

4) Check the master node's status

root@srv1:~# kubectl get nodes
NAME               STATUS   ROLES           AGE     VERSION
srv1.1000y.cloud   Ready    control-plane   9m59s   v1.24.2

5) Confirm the pods across all namespaces on the master

root@srv1:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-74586cf9b6-bnx4v                   1/1     Running   0          9m52s
kube-system   coredns-74586cf9b6-qdk4w                   1/1     Running   0          9m52s
kube-system   etcd-srv1.1000y.cloud                      1/1     Running   0          10m
kube-system   kube-apiserver-srv1.1000y.cloud            1/1     Running   0          10m
kube-system   kube-controller-manager-srv1.1000y.cloud   1/1     Running   0          10m
kube-system   kube-flannel-ds-qbkzk                      1/1     Running   0          2m22s
kube-system   kube-proxy-tm2l6                           1/1     Running   0          9m52s
kube-system   kube-scheduler-srv1.1000y.cloud            1/1     Running   0          10m

Part 3: Configuring the Worker Nodes

1) Join the worker nodes to the Kubernetes cluster

root@srv2:~# kubeadm join 192.168.1.11:6443 --token vkemxu.1zkcqx00umaech8i --discovery-token-ca-cert-hash sha256:8c83889acbef5a54b410e8d2513b6eca01ee7eef1244737bacec81168fc5d553
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Run the same join command on srv3; the output is identical.
root@srv3:~# kubeadm join 192.168.1.11:6443 --token vkemxu.1zkcqx00umaech8i --discovery-token-ca-cert-hash sha256:8c83889acbef5a54b410e8d2513b6eca01ee7eef1244737bacec81168fc5d553
......
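The bootstrap token in the join command expires after 24 hours by default. If you add another node later and the token is gone, regenerate a complete join command on the master:

root@srv1:~# kubeadm token create --print-join-command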

2) Verify the cluster

root@srv1:~# kubectl get nodes
NAME               STATUS   ROLES           AGE     VERSION
srv1.1000y.cloud   Ready    control-plane   41m     v1.24.2
srv2.1000y.cloud   Ready    <none>          3m48s   v1.24.2
srv3.1000y.cloud   Ready    <none>          3m51s   v1.24.2
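As an optional smoke test (my addition), schedule a small workload, confirm the pods land on the worker nodes, then clean up:

root@srv1:~# kubectl create deployment nginx --image=nginx --replicas=2
root@srv1:~# kubectl get pods -o wide
root@srv1:~# kubectl delete deployment nginx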
