- SIG: Special Interest Group
- TOC: Technical Oversight Committee
- CNCF: Cloud Native Computing Foundation
- Resource usage metrics in K8s (such as container CPU and memory usage) can be obtained through the Metrics API
- metrics-server collects its data by calling the Kubelet Summary API
- metrics-server provides CPU and memory usage for Nodes and Pods; other custom metrics are left to components such as Prometheus
- metrics-server only serves current metrics; it does not store historical data
- metrics-server is exposed together with kube-apiserver through the Aggregator plugin mechanism
- kubelet and kube-proxy must be deployed on the master nodes, otherwise the metrics-server component cannot be deployed properly
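The Aggregator mechanism mentioned above assumes kube-apiserver was started with the aggregation-layer flags. A sketch of the relevant flags (flag names come from the Kubernetes aggregation-layer documentation; the angle-bracket values are placeholders that depend on your cluster, and the allowed CN must match the aggregator client certificate, here the k8s-demo-aggregator CN used later in this article):

```
--requestheader-client-ca-file=<front-proxy CA certificate>
--requestheader-allowed-names=k8s-demo-aggregator
--requestheader-username-headers=X-Remote-User
--requestheader-group-headers=X-Remote-Group
--requestheader-extra-headers-prefix=X-Remote-Extra-
--proxy-client-cert-file=<aggregator client certificate>
--proxy-client-key-file=<aggregator client key>
```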
1. Download Metrics-Server
[root@master1 ~]# cd /opt/install && mkdir -p metrics-server
[root@master1 ~]# cd /opt/install/metrics-server
[root@master1 metrics-server]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml
[root@master1 metrics-server]# ll
total 8
-rw-r--r-- 1 root root 4181 Feb 10 00:25 components.yaml
[root@master1 metrics-server]# docker pull bitnami/metrics-server:0.6.1
0.6.1: Pulling from bitnami/metrics-server
83e3b3778d70: Pull complete
84780ccb9931: Pull complete
881c71882868: Pull complete
e7559fd8f2cc: Pull complete
46f5413f79fb: Pull complete
83010507f4e3: Pull complete
a31c3b7e4446: Pull complete
Digest: sha256:f2364867e58250832ac056a6cc360d73fe87543ed5a08218bd62b80263391293
Status: Downloaded newer image for bitnami/metrics-server:0.6.1
docker.io/bitnami/metrics-server:0.6.1
[root@master1 metrics-server]# docker tag bitnami/metrics-server:0.6.1 harbor.demo/k8s/metrics-server:0.6.1
[root@master1 metrics-server]# docker push harbor.demo/k8s/metrics-server:0.6.1
The push refers to repository [harbor.demo/k8s/metrics-server]
5f54aa4560d3: Pushed
f5de560d46bc: Pushed
b9f9f076e21d: Pushed
c02416026c66: Pushed
6816a2cafad5: Pushed
3c10a61517e5: Pushed
4c16ec6258b6: Pushed
0.6.1: digest: sha256:f2364867e58250832ac056a6cc360d73fe87543ed5a08218bd62b80263391293 size: 1785
[root@master1 metrics-server]# kubectl top nodes
error: Metrics API not available
[root@master1 metrics-server]# kubectl top pods
error: Metrics API not available
[root@master1 metrics-server]#
- kubectl top reports "Metrics API not available" at this point because metrics-server has not been deployed yet
- Download page: github.com/kubernetes-…
- Pulling the official image with docker pull k8s.gcr.io/metrics-server/metrics-server:v0.6.1 failed, so an unofficial image is used here; if you have a better solution, please leave a comment, thanks
- To inspect the configuration parameters: docker run --rm harbor.demo/k8s/metrics-server:0.6.1 --help
2. Edit the Metrics-Server Configuration File
[root@master1 ~]# cd /opt/install/metrics-server
[root@master1 metrics-server]# cp components.yaml components.yaml.backup
[root@master1 metrics-server]# vi components.yaml
[root@master1 metrics-server]# diff components.yaml.backup components.yaml
140c140,141
< image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
---
> - --requestheader-allowed-names=k8s-demo-aggregator
> image: harbor.demo/k8s/metrics-server:0.6.1
165a167,169
> limits:
> cpu: 500m
> memory: 500Mi
[root@master1 metrics-server]#
- Change the container image to the private registry address: harbor.demo/k8s/metrics-server:0.6.1
- Set the image pull policy to IfNotPresent (development environments usually use Always together with the latest tag for continuous updates)
- Add the command-line argument; k8s-demo-aggregator is the CN field configured in aggregator-client-csr.json
- components.yaml consists of the following 9 parts; if the download fails, you can copy and paste them directly:
1) ServiceAccount: metrics-server
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
2) ClusterRole: system:aggregated-metrics-reader
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
3) ClusterRole: system:metrics-server
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- watch
4) RoleBinding: metrics-server-auth-reader
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
5) ClusterRoleBinding: metrics-server:system:auth-delegator
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
6) ClusterRoleBinding: system:metrics-server
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
7) Service: metrics-server
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
8) Deployment: metrics-server
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --requestheader-allowed-names=k8s-demo-aggregator
image: harbor.demo/k8s/metrics-server:0.6.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
limits:
cpu: 500m
memory: 500Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
9) APIService: v1beta1.metrics.k8s.io
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100
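The manual vi edit from this section can also be scripted. A minimal sketch with sed, assuming the image line in components.yaml matches the upstream reference exactly (a one-line stand-in file is created first so the commands are runnable as-is):

```shell
# Stand-in for the real components.yaml (only the image line matters for this sketch).
printf '          image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1\n' > components.yaml

# Keep a backup, then point the image at the private registry.
cp components.yaml components.yaml.backup
sed -i 's|k8s.gcr.io/metrics-server/metrics-server:v0.6.1|harbor.demo/k8s/metrics-server:0.6.1|' components.yaml

# Confirm the replacement took effect.
grep 'image:' components.yaml
```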
3. Deploy the Metrics-Server Service
- Deploy Metrics-Server
[root@master1 ~]# cd /opt/install/metrics-server
[root@master1 metrics-server]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@master1 metrics-server]#
- Check the Metrics API service
[root@master1 ~]# kubectl get apiservices.apiregistration.k8s.io | grep metrics
v1beta1.metrics.k8s.io kube-system/metrics-server True 65s
[root@master1 ~]# kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apiregistration.k8s.io/v1","kind":"APIService","metadata":{"annotations":{},"labels":{"k8s-app":"metrics-server"},"name":"v1beta1.metrics.k8s.io"},"spec":{"group":"metrics.k8s.io","groupPriorityMinimum":100,"insecureSkipTLSVerify":true,"service":{"name":"metrics-server","namespace":"kube-system"},"version":"v1beta1","versionPriority":100}}
creationTimestamp: "2022-06-02T08:41:11Z"
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
resourceVersion: "147929"
uid: 9a3f644c-7484-429d-90c6-9279cabe2f5b
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
port: 443
version: v1beta1
versionPriority: 100
status:
conditions:
- lastTransitionTime: "2022-06-02T08:41:51Z"
message: all checks passed
reason: Passed
status: "True"
type: Available
[root@master1 ~]#
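If the APIService instead stays at Available=False because metrics-server cannot verify the kubelet serving certificates, the metrics-server documentation offers a development-only workaround: add --kubelet-insecure-tls to the container args. This skips certificate verification and is not recommended for production; the certificate-based setup used in this article is preferred. A fragment of the Deployment showing where the flag would go:

```yaml
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls   # dev-only: skip kubelet certificate verification
```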
4. Verification and Testing
- View node resource usage
[root@master1 ~]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master1 286m 3% 2523Mi 32%
master2 230m 2% 1277Mi 16%
master3 228m 2% 1527Mi 19%
node1 134m 1% 728Mi 9%
node2 125m 1% 949Mi 12%
[root@master1 ~]#
- View Pod resource usage
[root@master1 ~]# kubectl top pod -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
default pingtest-ip-pool-1-677bd7dc78-ph6r4 0m 0Mi
default pingtest-ip-pool-1-677bd7dc78-x5pfk 0m 0Mi
default pingtest-ip-pool-2-5f7bb9f589-7mdg4 0m 0Mi
default pingtest-ip-pool-2-5f7bb9f589-lx7c7 0m 0Mi
kube-system coredns-799bc9dbc6-pczfh 2m 23Mi
kube-system k8s-demo-calico-node-app-cgjrq 41m 88Mi
kube-system k8s-demo-calico-node-app-k8x88 43m 92Mi
kube-system k8s-demo-calico-node-app-mzqlr 42m 89Mi
kube-system k8s-demo-calico-node-app-tsnw2 39m 91Mi
kube-system k8s-demo-calico-node-app-x4grj 52m 143Mi
kube-system k8s-demo-calico-typha-app-5d4b9f9f88-9ns5w 4m 28Mi
kube-system k8s-demo-calico-typha-app-5d4b9f9f88-hdchf 4m 25Mi
kube-system k8s-demo-calico-typha-app-5d4b9f9f88-rs8vt 4m 23Mi
kube-system metrics-server-79f79cfd87-fljq9 5m 30Mi
[root@master1 ~]#
- Retrieve node CPU and memory usage from the command line
[root@master1 ~]# kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes/node2" | jq
{
"kind": "NodeMetrics",
"apiVersion": "metrics.k8s.io/v1beta1",
"metadata": {
"name": "node2",
"creationTimestamp": "2022-06-02T08:45:26Z",
"labels": {
"beta.kubernetes.io/arch": "amd64",
"beta.kubernetes.io/os": "linux",
"kubernetes.io/arch": "amd64",
"kubernetes.io/hostname": "node2",
"kubernetes.io/os": "linux",
"node-role.kubernetes.io/node": ""
}
},
"timestamp": "2022-06-02T08:45:16Z",
"window": "20.032s",
"usage": {
"cpu": "118301006n",
"memory": "973588Ki"
}
}
[root@master1 ~]#
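The raw API reports CPU in nanocores (suffix n) and memory in KiB, while kubectl top prints millicores and MiB; the conversion is simple arithmetic. A quick sketch using the node2 values above:

```shell
# CPU: 1 millicore = 1,000,000 nanocores.
nanocores=118301006
millicores=$((nanocores / 1000000))
echo "cpu: ${millicores}m"      # prints "cpu: 118m"

# Memory: 1 Mi = 1024 Ki (integer division, as kubectl rounds down).
kib=973588
mib=$((kib / 1024))
echo "memory: ${mib}Mi"         # prints "memory: 950Mi"
```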
Happy Children's Day to the kids, and to all the grown-up kids~~~
- Start by using it: hands-on practice is how you get to know Kubernetes (k8s), and with enough accumulated experience, understanding comes naturally
- Share what you have come to understand; the effort benefits others and comes back to you
- Strive for simplicity, since simplicity aids understanding; context such as versions and dates is part of the knowledge too
- Comments and questions are welcome; I usually reply and update the document on weekends
- Jason@vip.qq.com 2022-6-1