
Kubernetes Study Notes (1): Installation

I. Offline Installation

https://cloud.tencent.com/developer/article/1445946

https://github.com/liul85/sealos provides an offline kube1.16.0.tar.gz package:

https://sealyun.oss-cn-beijing.aliyuncs.com/37374d999dbadb788ef0461844a70151-1.16.0/kube1.16.0.tar.gz 

https://sealyun.oss-cn-beijing.aliyuncs.com/7b6af025d4884fdd5cd51a674994359c-1.18.0/kube1.18.0.tar.gz
https://sealyun.oss-cn-beijing.aliyuncs.com/a4f6fa2b1721bc2bf6fe3172b72497f2-1.17.12/kube1.17.12.tar.gz

The following error appeared when installing with sealos:

[root@host-10-14-69-125 kubernetes]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding

To be completed ...

II. Online Installation

1. Prerequisites

1) The operating system is CentOS 7.6; see "Linux Study Notes (1): CentOS 7.6 Installation".

2) The servers on which Kubernetes will be installed must have internet access.

3) Prepare two servers: one as the k8s master node and the other as the k8s worker node.

2. Preparation

(1) Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

(2) Set the hostname and add a hosts entry

hostnamectl set-hostname k8s-2   # k8s-2 is the hostname of the master server
echo "127.0.0.1 k8s-2" >> /etc/hosts

(3) Synchronize the clock (against Aliyun's NTP server)

yum install -y ntp
ntpdate ntp1.aliyun.com

(4) Install common tools

yum install -y wget

3. Install Docker

# Install prerequisite packages
yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2
# Add the Docker yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Refresh the cache and install Docker CE
yum makecache fast
yum -y install docker-ce
# Start docker
systemctl start docker
# Enable docker on boot
systemctl enable docker
# Check the docker service status
systemctl status docker

4. Configure Docker

mkdir -p /etc/docker
# registry-mirrors: use the Aliyun registry mirror to speed up image pulls
# exec-opts: switch Docker's cgroup driver from cgroupfs to systemd, the driver kubernetes expects
# (note: daemon.json is plain JSON, so comments must not appear inside the file itself)
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://obww7jh1.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Reload the daemon configuration
systemctl daemon-reload
# Restart docker
systemctl restart docker
# View docker details
docker info
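Because daemon.json must be pure JSON (comments inside the file break the Docker daemon), it is worth validating the file before restarting Docker. A minimal hedged sketch, using `python3` purely as a JSON checker and a scratch file in /tmp so the real config is untouched:

```shell
# Write the expected daemon.json content to a scratch file for checking.
cat > /tmp/daemon.json.check <<'EOF'
{
  "registry-mirrors": ["https://obww7jh1.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Fail loudly if the file is not valid JSON (e.g. if comments crept in).
if python3 -m json.tool /tmp/daemon.json.check > /dev/null; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: INVALID JSON - fix it before restarting docker"
fi
```

Running the same check against the real file (`python3 -m json.tool /etc/docker/daemon.json`) before `systemctl restart docker` avoids taking the daemon down with a bad config; `jq` would work equally well if installed.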

5. Install Kubernetes

# Add the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# List available versions; this install uses 1.17.12-0
yum --showduplicates list kubelet | expand
# Install kubelet, kubeadm and kubectl
yum install -y kubelet-1.17.12-0 kubeadm-1.17.12-0 kubectl-1.17.12-0

6. Configure Kubernetes

# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Configure bridge/iptables kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply k8s.conf
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

# Disable swap: edit /etc/fstab and comment out the swap entry
vi /etc/fstab
# /dev/mapper/centos-swap swap     swap    defaults        0 0
# Save, quit vi, then turn swap off
swapoff -a
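The manual vi edit above can also be done non-interactively with sed. A hedged sketch, demonstrated here on a scratch copy so the real /etc/fstab is untouched; the pattern assumes the swap entry looks like the commented example above:

```shell
# Build a scratch fstab containing a swap line like the one shown above.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /       xfs     defaults        0 0
/dev/mapper/centos-swap swap    swap    defaults        0 0
EOF
# Comment out every not-yet-commented line that mentions a swap mount.
sed -i '/\sswap\s/ s/^[^#]/#&/' /tmp/fstab.demo
# The swap line is now commented; the root line is untouched.
grep swap /tmp/fstab.demo
```

Against the real system this would be `sed -i '/\sswap\s/ s/^[^#]/#&/' /etc/fstab && swapoff -a`; running it twice is harmless because already-commented lines are skipped.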

# Enable kubelet on boot
systemctl enable kubelet

7. Initialize the master node of the k8s cluster

kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers

--pod-network-cidr: a prerequisite for the flannel installation later; the value must be 10.244.0.0/16.

--image-repository: specifies the image registry to pull from.

The command above produces the following log:

[root@k8s-2 opt]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers
W1013 10:38:56.543641   19539 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1013 10:38:56.543871   19539 version.go:102] falling back to the local client version: v1.17.12
W1013 10:38:56.544488   19539 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1013 10:38:56.544515   19539 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.12
[preflight] Running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.149.133]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-2 localhost] and IPs [192.168.149.133 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-2 localhost] and IPs [192.168.149.133 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1013 10:42:48.939526   19539 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1013 10:42:48.941281   19539 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.010651 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9od4xd.15l09jrrxa7qo3ny
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.149.133:6443 --token 9od4xd.15l09jrrxa7qo3ny \
    --discovery-token-ca-cert-hash sha256:fb23ab81f7b95b36595dfb44ee7aab865aac7671a416b57f9cb2461f45823ea1

Run the commands from the highlighted (red) section of the output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

8. Deploy a Pod network to the cluster; flannel is chosen here

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If this fails because raw.githubusercontent.com cannot be resolved, write it directly into /etc/hosts:

echo "199.232.28.133 raw.githubusercontent.com" >> /etc/hosts
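Note that the `echo >> /etc/hosts` above appends a duplicate line every time it is run. A hedged sketch of an idempotent variant, demonstrated on a scratch file in /tmp (the IP is the one from this note and may change over time):

```shell
HOSTS=/tmp/hosts.demo
ENTRY="199.232.28.133 raw.githubusercontent.com"
printf '127.0.0.1 localhost\n' > "$HOSTS"
# Append only if the host is not already present; running this twice
# adds the line once, not twice.
grep -q 'raw.githubusercontent.com' "$HOSTS" || echo "$ENTRY" >> "$HOSTS"
grep -q 'raw.githubusercontent.com' "$HOSTS" || echo "$ENTRY" >> "$HOSTS"
grep -c 'raw.githubusercontent.com' "$HOSTS"   # prints 1
```

Replacing `/tmp/hosts.demo` with `/etc/hosts` gives a version that is safe to re-run during repeated installs.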

Then run the kubectl apply command above again; this time it succeeds.

Check the cluster status:

kubectl cluster-info

9. Verify the installation

# List all nodes
kubectl get nodes
# List pods in all namespaces
kubectl get pods --all-namespaces

If anything goes wrong during initialization, reset with the following commands:

kubeadm reset
rm -rf /var/lib/cni/
rm -f $HOME/.kube/config

10. Initialize the worker node of the k8s cluster

This walkthrough only sets up a master node; to add a worker node, see the articles listed in the references.

11. Problems encountered during deployment

(1) After Docker was installed, its cgroup driver was not changed, so kubelet failed to start when Kubernetes was deployed (check the service state with systemctl status kubelet).

docker info showed that the default driver was cgroupfs.

So be sure to perform the "Configure Docker" step above; otherwise you will have to go back and change Docker's driver afterwards.

Add the following to /etc/docker/daemon.json:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker

After restarting Docker, check the docker info output again to confirm the driver is now systemd.

References:

https://www.cnblogs.com/bluersw/p/11713468.html

https://www.jianshu.com/p/832bcd89bc07

III. Uninstalling and Cleaning Up Kubernetes

kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd