Linux Operations and Architecture: Deploying a Kubernetes Cluster Quickly with kubeadm
阿新 • Published: 2020-09-01
一、Introduction

kubeadm bootstraps a Kubernetes cluster with two commands:

```
# Create a Master node
$ kubeadm init

# Join a Node to the current cluster
$ kubeadm join <Master IP:port>
```
二、Kubernetes Architecture Diagram
三、Deploying the k8s Cluster
1、Basic Environment

- OS: CentOS 7.x x86_64
- Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB disk or more
- Swap disabled
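Before proceeding, the requirements above can be checked with a small pre-flight script. This is a sketch, not part of the original procedure; it assumes a Linux host with `/proc` available:

```shell
# Pre-flight check sketch: verify CPU count, RAM, and swap state.
cpus=$(nproc)
mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
swap_kb=$(awk 'NR>1 {s+=$3} END {print s+0}' /proc/swaps)
echo "CPUs: $cpus (need >= 2)"
echo "RAM:  $((mem_kb / 1024)) MB (need >= 2048)"
[ "$swap_kb" -eq 0 ] && echo "swap: disabled (ok)" || echo "swap: ENABLED, run swapoff -a"
```

Run it on every node before `kubeadm init`/`join`; the kubeadm preflight checks test the same minimums.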
2、Server Plan

Role | IP
---|---
k8s-master | 192.168.56.61
k8s-node1 | 192.168.56.62
3、System Initialization

Run the following on all nodes:

```
# Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap:
swapoff -a  # temporary
# vim /etc/fstab  # permanent: comment out the swap line

# Set the hostname:
hostnamectl set-hostname <hostname>

# Add hosts entries on the master:
cat >> /etc/hosts << EOF
192.168.56.61 k8s-master
192.168.56.62 k8s-node1
EOF

# Pass bridged IPv4 traffic to the iptables chains:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply

# Time synchronization:
yum install ntpdate -y
ntpdate time.windows.com
```
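The permanent swap fix is a manual `/etc/fstab` edit. As a sketch, the same edit can be done non-interactively with `sed`; it is demonstrated here on a temporary copy with hypothetical device names, so it is safe to run as-is (point it at `/etc/fstab` for real use):

```shell
# Sample fstab fragment with hypothetical LVM device names:
cat > /tmp/fstab.demo << 'EOF'
/dev/mapper/centos-root /       xfs   defaults 0 0
/dev/mapper/centos-swap swap    swap  defaults 0 0
EOF

# Comment out any active swap line (same effect as editing /etc/fstab by hand):
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' /tmp/fstab.demo

grep swap /tmp/fstab.demo  # the swap line should now start with '#'
```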
4、Install Docker/kubeadm/kubelet on All Nodes

①Install Docker

```
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version

# Configure a registry mirror:
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
```
②Add the Aliyun Kubernetes yum repository

```
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
③Install kubeadm, kubelet, and kubectl

```
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet
```
④Deploy the Kubernetes Master

Run on the Master node:

```
kubeadm init \
  --apiserver-advertise-address=192.168.56.61 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all
```
Or bootstrap with a configuration file:

```
vi kubeadm.conf

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
```

```
kubeadm init --config kubeadm.conf --ignore-preflight-errors=all
```
⑤Configure the kubectl tool

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
```
⑥Join worker nodes

`kubeadm init` prints the join command in its output:

```
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.61:6443 --token 94kw30.b1gswshp2grv5vgd \
    --discovery-token-ca-cert-hash sha256:0497a78ea746f2c1f48d67f3dca9d65cb4010868f22f2a0bbefb101d74c6f057
```
The default token is valid for 24 hours. Once it expires, it can no longer be used and a new one must be created:

```
kubeadm token create
kubeadm token list

# Recompute the CA certificate hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
0497a78ea746f2c1f48d67f3dca9d65cb4010868f22f2a0bbefb101d74c6f057

kubeadm join 192.168.56.61:6443 --token 94kw30.b1gswshp2grv5vgd --discovery-token-ca-cert-hash sha256:0497a78ea746f2c1f48d67f3dca9d65cb4010868f22f2a0bbefb101d74c6f057
```
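The long `--discovery-token-ca-cert-hash` value is simply the SHA-256 digest of the cluster CA's public key. As a self-contained sketch of how that pipeline works, here it is run against a throwaway self-signed certificate generated on the spot (a stand-in for the real `/etc/kubernetes/pki/ca.crt`), so it can be tried without a cluster:

```shell
# Generate a throwaway "CA" certificate (demo stand-in, not the real cluster CA):
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj "/CN=demo-ca" -days 1 2>/dev/null

# Extract the public key, DER-encode it, and take its SHA-256 digest:
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

Running the same pipeline against the real `ca.crt` on the master reproduces the hash shown in the join command.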
5、Deploy the Container Network (CNI)

①Calico
Calico is a pure layer-3 data-center networking solution that supports a wide range of platforms, including Kubernetes and OpenStack.
On every compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) responsible for data forwarding, and each vRouter propagates the routes of the workloads running on it across the whole Calico network via the BGP protocol. Calico also implements Kubernetes network policy, providing ACL functionality.
Documentation:
```
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```
Modify calico.yaml:

- Set the Pod network (CALICO_IPV4POOL_CIDR) to match the pod CIDR configured earlier
- Choose the working mode (CALICO_IPV4POOL_IPIP): **BGP (Never)**, **IPIP (Always)**, or **CrossSubnet** (BGP within a subnet, tunneling across subnets)

```
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
- name: CALICO_IPV4POOL_VXLAN
  value: "Never"
```
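The CIDR edit can also be scripted rather than done by hand in an editor. This sketch demonstrates the substitution on a small sample fragment; the `192.168.0.0/16` default is an assumption that may differ between Calico versions, so check the downloaded manifest first:

```shell
# Sample fragment standing in for the relevant part of calico.yaml:
cat > /tmp/calico-frag.yaml << 'EOF'
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
EOF

# Point the pool CIDR at the cluster's --pod-network-cidr:
sed -i 's#192.168.0.0/16#10.244.0.0/16#' /tmp/calico-frag.yaml

grep -A1 CALICO_IPV4POOL_CIDR /tmp/calico-frag.yaml
```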
Deploy Calico:

```
kubectl apply -f calico.yaml

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-59877c7fb4-z2bms   1/1     Running   0          6m59s
calico-node-pnjxq                          1/1     Running   0          6m59s
calico-node-v48jq                          1/1     Running   0          6m59s
coredns-7ff77c879f-dqk8t                   1/1     Running   0          23m
coredns-7ff77c879f-j8zsp                   1/1     Running   0          23m
etcd-k8s-master                            1/1     Running   0          23m
kube-apiserver-k8s-master                  1/1     Running   0          23m
kube-controller-manager-k8s-master         1/1     Running   0          23m
kube-proxy-ck88h                           1/1     Running   0          16m
kube-proxy-hkb9f                           1/1     Running   0          23m
kube-scheduler-k8s-master                  1/1     Running   0          23m
```
②Flannel (alternative)

```
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.11.0-amd64#g" kube-flannel.yml
```
6、Test the Cluster

- Create a Pod and verify it runs
- Verify Pod network connectivity
- Verify DNS resolution
①Check cluster status

```
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   28m   v1.18.0
k8s-node1    Ready    <none>   21m   v1.18.0
```
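For scripting, node readiness can be checked with a one-line awk filter. As a sketch it is run here on a captured copy of the `kubectl get nodes` output above, so it works without a live cluster:

```shell
# Captured sample of `kubectl get nodes` output:
cat > /tmp/nodes.txt << 'EOF'
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   28m   v1.18.0
k8s-node1    Ready    <none>   21m   v1.18.0
EOF

# Count nodes whose STATUS column is not "Ready" (skipping the header):
notready=$(awk 'NR>1 && $2!="Ready"' /tmp/nodes.txt | wc -l)
[ "$notready" -eq 0 ] && echo "all nodes Ready"
```

On a live cluster, pipe `kubectl get nodes` straight into the awk filter instead of the file.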
②Create an application

```
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-28gpp   1/1     Running   0          114s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        34m
service/nginx        NodePort    10.96.142.106   <none>        80:31233/TCP   73s
```
7、Deploy the Dashboard

```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
```
By default the Dashboard is only reachable from inside the cluster. Change the Service to type NodePort to expose it externally:

```
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
```
After modification:

```
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
```
Create a service account and bind it to the default cluster-admin cluster role:

```
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

# Get the login token:
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
```