015. Kubernetes Binary Deployment: kubelet on All Nodes

1 Deploy kubelet

kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.
On startup, kubelet automatically registers the node with kube-apiserver, and its built-in cAdvisor collects and reports the node's resource usage.
For security, this deployment disables kubelet's insecure HTTP port and authenticates and authorizes every request, rejecting unauthorized access (for example requests from apiserver or Heapster).

1.1 Install kubelet

Tip: the required binaries have already been downloaded on the k8smaster01 node and can be distributed directly to the node machines.
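
For reference, a minimal sketch of how the server tarball could have been fetched and unpacked on k8smaster01 (the version number and download URL below are assumptions; use the release chosen earlier in this series):
# Hypothetical download step; replace K8S_VERSION with the release actually deployed.
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# K8S_VERSION=v1.14.2
[root@k8smaster01 work]# wget https://dl.k8s.io/${K8S_VERSION}/kubernetes-server-linux-amd64.tar.gz
[root@k8smaster01 work]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@k8smaster01 work]# ls -l kubernetes/server/bin/kubelet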

1.2 Distribute kubelet

[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp kubernetes/server/bin/kubelet root@${all_ip}:/opt/k8s/bin/
    ssh root@${all_ip} "chmod +x /opt/k8s/bin/*"
  done

1.3 Create the kubelet bootstrap kubeconfig

[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"

    # Create a bootstrap token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${all_name} \
      --kubeconfig ~/.kube/config)

    # Set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig

    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig

    # Set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig

    # Set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig
  done
Explanation:
What gets written into the kubeconfig is the token; after bootstrapping finishes, kube-controller-manager creates the client and server certificates for the kubelet.
The token is valid for 1 day; once expired it can no longer be used to bootstrap a kubelet and will be cleaned up by kube-controller-manager's tokencleaner.
When kube-apiserver receives the kubelet's bootstrap token, it sets the request's user to system:bootstrap:<token id> and its group to system:bootstrappers; a ClusterRoleBinding will be created for this group later.

1.4 View the bootstrap tokens

[root@k8smaster01 work]# kubeadm token list --kubeconfig ~/.kube/config		# List the tokens kubeadm created for each node
[root@k8smaster01 work]# kubectl get secrets -n kube-system | grep bootstrap-token	# View the secrets associated with the tokens
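
As a hedged illustration of the token format: a bootstrap token looks like <token-id>.<token-secret>; the token id appears in the user name system:bootstrap:<token-id> and in the secret name bootstrap-token-<token-id>. Assuming the shell still holds the BOOTSTRAP_TOKEN exported for the last node in the loop above:
[root@k8smaster01 work]# TOKEN_ID=$(echo "${BOOTSTRAP_TOKEN}" | cut -d. -f1)	# the part before the dot is the token id
[root@k8smaster01 work]# kubectl -n kube-system get secret bootstrap-token-${TOKEN_ID} -o yaml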

1.5 Distribute the bootstrap kubeconfig

[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    scp kubelet-bootstrap-${all_name}.kubeconfig root@${all_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  done

1.6 Create the kubelet configuration file

Starting with v1.10, some kubelet parameters must be set in a configuration file, so creating a kubelet configuration file is recommended.
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##ALL_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##ALL_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available:  "100Mi"
  nodefs.available:  "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF

1.7 Distribute the kubelet configuration file

[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    sed -e "s/##ALL_IP##/${all_ip}/" kubelet-config.yaml.template > kubelet-config-${all_ip}.yaml.template
    scp kubelet-config-${all_ip}.yaml.template root@${all_ip}:/etc/kubernetes/kubelet-config.yaml
  done
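
Optionally, a quick spot check (a sketch, not part of the original procedure) that the per-node substitution landed correctly:
[root@k8smaster01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    # address and healthzBindAddress should now carry each node's own IP
    ssh root@${all_ip} "grep -E '^(address|healthzBindAddress):' /etc/kubernetes/kubelet-config.yaml"
  done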

1.8 Create the kubelet systemd unit

[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --allow-privileged=true \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=docker \\
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##ALL_NAME## \\
  --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
Explanation:
  • If --hostname-override is set, kube-proxy must be started with the same override, otherwise the node may not be found (a check is sketched below);
  • --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the user name and token in this file to send the TLS bootstrapping request to kube-apiserver;
  • After Kubernetes approves the kubelet's CSR, the certificate and private key files are created in the --cert-dir directory and the --kubeconfig file is then written;
  • --pod-infra-container-image does not use Red Hat's pod-infrastructure:latest image, because that image cannot reap zombie processes in containers.
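
For the first bullet, a hedged check once kube-proxy has also been deployed (a later step in this series); the kube-proxy unit path below is an assumption based on the layout used here:
# Hypothetical check: the two values must be identical on each node.
[root@k8snode01 ~]# grep -- '--hostname-override' /etc/systemd/system/kubelet.service /etc/systemd/system/kube-proxy.service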

1.9 Distribute the kubelet systemd unit

[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    sed -e "s/##ALL_NAME##/${all_name}/" kubelet.service.template > kubelet-${all_name}.service
    scp kubelet-${all_name}.service root@${all_name}:/etc/systemd/system/kubelet.service
  done

2 Start and verify

2.1 Authorization

When kubelet starts, it checks whether the file referenced by the --kubeconfig flag exists; if not, it uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.
When kube-apiserver receives the CSR, it authenticates the embedded token; on success it sets the request's user to system:bootstrap:<token id> and its group to system:bootstrappers. This process is called bootstrap token authentication.
By default this user and group have no permission to create CSRs, so kubelet would fail to start. Grant the permission by creating a ClusterRoleBinding that binds the group system:bootstrappers to the ClusterRole system:node-bootstrapper:

[root@k8smaster01 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
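
To confirm the binding, something like the following can be used:
[root@k8smaster01 ~]# kubectl describe clusterrolebinding kubelet-bootstrap	# should show ClusterRole system:node-bootstrapper bound to group system:bootstrappers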

2.2 Start kubelet

[root@k8smaster01 ~]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 ~]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    ssh root@${all_name} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    ssh root@${all_name} "/usr/sbin/swapoff -a"
    ssh root@${all_name} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  done
After starting, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates the TLS client certificate and private key for the kubelet and writes the file referenced by --kubeconfig.
Note: kube-controller-manager must be configured with the --cluster-signing-cert-file and --cluster-signing-key-file flags, otherwise it will not create the certificate and private key for the TLS bootstrap.
Tips:
The working directory must be created before the service is started;
The swap partition must be turned off, otherwise kubelet will fail to start.
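
While the nodes bootstrap, progress can be followed with standard commands; a hedged example (the kube-controller-manager unit path is assumed from the layout used in this series):
[root@k8smaster01 ~]# kubectl get csr -w							# watch CSRs arrive and get approved (Ctrl-C to stop)
[root@k8smaster01 ~]# ssh root@k8snode01 "journalctl -u kubelet --no-pager | tail -n 20"	# inspect kubelet logs on a node
[root@k8smaster01 ~]# grep cluster-signing /etc/systemd/system/kube-controller-manager.service	# confirm the signing flags from the note above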

2.3 Check the kubelet service

[root@k8smaster01 ~]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 ~]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    ssh root@${all_name} "systemctl status kubelet"
  done
[root@k8snode01 ~]# kubectl get csr
[root@k8snode01 ~]# kubectl get nodes

3 Approve CSR requests

3.1 Automatically approve CSR requests

Create three ClusterRoleBindings, used respectively to automatically approve client certificates, renew client certificates, and renew server certificates.
[root@k8snode01 ~]# cd /opt/k8s/work
[root@k8snode01 work]# cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
[root@k8snode01 work]# kubectl apply -f csr-crb.yaml
Explanation:
auto-approve-csrs-for-group: automatically approves a node's first CSR; note that the first CSR is requested with group system:bootstrappers;
node-client-cert-renewal: automatically approves renewal of a node's expiring client certificates; the automatically issued certificates carry group system:nodes;
node-server-cert-renewal: automatically approves renewal of a node's expiring server certificates; the automatically issued certificates carry group system:nodes.
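
To verify that the four objects were created, for example:
[root@k8snode01 work]# kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal
[root@k8snode01 work]# kubectl get clusterrole approve-node-server-renewal-csr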

3.2 Check kubelet status

[root@k8snode01 ~]# kubectl get csr | grep boot		# Wait a while (1-10 minutes); the CSRs of all three nodes are approved automatically
[root@k8snode01 ~]# kubectl get nodes			# All nodes become Ready
[root@k8snode01 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig
[root@k8snode01 ~]# ls -l /etc/kubernetes/cert/|grep kubelet

3.3 Manually approve server cert CSRs

For security reasons, the CSR-approving controllers do not automatically approve kubelet server certificate signing requests; these must be approved manually.
[root@k8smaster01 ~]# kubectl get csr
[root@k8smaster01 ~]# kubectl certificate approve csr-2kmtj
[root@k8smaster01 ~]# ls -l /etc/kubernetes/cert/kubelet-*
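
If several serving-certificate CSRs are pending (for example one per node), they can be approved in one pass; a hedged convenience one-liner:
# Approves every CSR still in Pending state - only use it when all pending requests are expected.
[root@k8smaster01 ~]# kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve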

4 The kubelet API

4.1 APIs exposed by kubelet

[root@k8smaster01 ~]# sudo netstat -lnpt|grep kubelet			# Check the ports kubelet listens on
Explanation:
  • 10248: the healthz HTTP endpoint (a quick check is sketched below);
  • 10250: the HTTPS endpoint; requests to this port require authentication and authorization (even for /healthz);
  • the read-only port 10255 is not enabled;
  • since Kubernetes v1.10 the --cadvisor-port flag (default port 4194) has been removed, so the cAdvisor UI & API are no longer served.
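
For the healthz port, a plain HTTP request against the address configured in healthzBindAddress is enough; for example:
[root@k8smaster01 ~]# curl http://172.24.8.71:10248/healthz	# no authentication needed; expected to return: ok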

4.2 kubelet API authentication and authorization

kubelet is configured with the following authentication parameters:
  • authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed;
  • authentication.x509.clientCAFile: specifies the CA certificate that signs client certificates, enabling HTTPS certificate authentication;
  • authentication.webhook.enabled=true: enables HTTPS bearer token authentication.
It is also configured with the following authorization parameter:
authorization.mode=Webhook: enables RBAC authorization.

When kubelet receives a request, it authenticates the client certificate against clientCAFile, or checks whether the bearer token is valid. If neither succeeds, the request is rejected with Unauthorized.
[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://172.24.8.71:10250/metrics
Unauthorized[root@k8smaster01 ~]#
[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://172.24.8.71:10250/metrics
Unauthorized
Once authentication succeeds, kubelet sends a SubjectAccessReview request to kube-apiserver to check whether the user and group behind the certificate or token are authorized (RBAC) to operate on the requested resource.
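
The same authorization decision can be previewed from the kubectl side with kubectl auth can-i, mirroring the checks in 4.3 below; for example:
[root@k8smaster01 ~]# kubectl auth can-i get nodes/metrics						# as the current admin user
[root@k8smaster01 ~]# kubectl auth can-i get nodes/metrics --as system:kube-controller-manager		# the identity that is denied in 4.3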

4.3 Certificate authentication and authorization

[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://172.24.8.71:10250/metrics	# Insufficient permissions by default
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)
curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.24.8.71:10250/metrics|head				# Use the admin certificate, which has the highest privileges
Explanation:
The values of --cacert, --cert, and --key must be file paths; for a relative path such as ./admin.pem the leading ./ cannot be omitted, otherwise a 401 Unauthorized is returned.

4.4 Bearer token authentication and authorization

[root@k8smaster01 ~]# kubectl create sa kubelet-api-test
[root@k8smaster01 ~]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
[root@k8smaster01 ~]# secret=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@k8smaster01 ~]# token=$(kubectl describe secret ${secret} | grep -E '^token' | awk '{print $2}')
[root@k8smaster01 ~]# echo ${token}
[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${token}" https://172.24.8.71:10250/metrics|head

4.5 cAdvisor and metrics

cAdvisor is built into the kubelet binary; it collects the resource usage (CPU, memory, disk, network) of the containers running on the node.
Opening https://172.24.8.71:10250/metrics and https://172.24.8.71:10250/metrics/cadvisor in a browser returns the kubelet metrics and the cAdvisor metrics respectively.
Note:
The kubelet configuration file (kubelet-config.yaml) sets authentication.anonymous.enabled to false, so anonymous access to the HTTPS service on port 10250 is not allowed;
refer to https://github.com/opsnull/follow-me-install-kubernetes-cluster/blob/master/a.%e6%b5%8f%e8%a7%88%e5%99%a8%e8%ae%bf%e9%97%aekube-apiserver%e5%ae%89%e5%85%a8%e7%ab%af%e5%8f%a3.md to create and import the relevant certificates, then access port 10250 as above.
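
Instead of a browser, the bearer token created in 4.4 can also be reused from the command line; a sketch:
# ${token} is the kubelet-api-test service-account token obtained in section 4.4.
[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${token}" https://172.24.8.71:10250/metrics/cadvisor | head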