Kubernetes Resource Limits: ResourceQuota

1 Introduction

Kubernetes provides two ways to limit resources: ResourceQuota and LimitRange.

ResourceQuota restricts resources at the namespace level, while LimitRange restricts the individual objects (such as pods and containers) inside a namespace.
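For contrast, a LimitRange constrains (and can default) the requests and limits of each container. A minimal sketch, assuming you only want per-container defaults; the object name and values here are illustrative, not from the original:

apiVersion: v1
kind: LimitRange
metadata:
  name: example-limitrange       # illustrative name
  namespace: testquota
spec:
  limits:
  - type: Container
    default:                     # applied as limits when a container sets none
      cpu: 500m
      memory: 256Mi
    defaultRequest:              # applied as requests when a container sets none
      cpu: 100m
      memory: 128Mi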

When multiple namespaces share the same cluster, one namespace may consume more than its fair share of resources and crowd out the others. To prevent this, we can create a ResourceQuota in each namespace.

When users create resources in the namespace, the quota system tracks usage to ensure the ResourceQuota limits are not exceeded. If creating or updating a resource would violate a quota constraint, the request fails with HTTP status code 403 FORBIDDEN.
Changes to a ResourceQuota do not affect pods that have already been created.

Kubernetes usually enables ResourceQuota by default: if the apiserver startup flag --enable-admission-plugins= includes ResourceQuota, the admission plugin is enabled.
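To check which admission plugins are enabled, you can inspect the apiserver flag directly. The commands below assume a kubeadm-style cluster where the apiserver runs as a static pod; adjust the manifest path for your environment:

# kubeadm default static pod manifest location (assumption)
grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
# or look at the running process
ps aux | grep kube-apiserver | tr ' ' '\n' | grep enable-admission-plugins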

2 Usage

Create a namespace for testing

[~] kubectl create ns testquota
namespace/testquota created
[~] kubectl get ns | grep quota
testquota Active 3m41s

Create a ResourceQuota

[yaml] cat resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: testquota-resources
  namespace: testquota
spec:
  hard:
    pods: "4"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
[yaml] kubectl apply -f resourcequota.yaml
resourcequota/testquota-resources created
[yaml] kubectl describe resourcequotas -n testquota testquota-resources
Name:            testquota-resources
Namespace:       testquota
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     2
limits.memory    0     2Gi
pods             0     4
requests.cpu     0     1
requests.memory  0     1Gi
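The same quota can also be created imperatively; an equivalent one-liner, shown for reference:

kubectl create quota testquota-resources -n testquota \
  --hard=pods=4,requests.cpu=1,requests.memory=1Gi,limits.cpu=2,limits.memory=2Gi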

Create a Deployment with resource requests and limits

[yaml] cat quota-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: testquota
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "500m"
[yaml] kubectl apply -f quota-deploy.yaml
deployment.apps/nginx-deployment created
[yaml] kubectl get po -n testquota
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7c6bbc77d8-mfxnl   1/1     Running   0          9s
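Note that once a ResourceQuota tracks compute resources (requests.* / limits.*), every new pod in the namespace must declare those values, either explicitly or via LimitRange defaults, or the apiserver rejects it. A quick way to see this, using a throwaway pod name of my own choosing:

# this pod sets no requests/limits, so quota admission should refuse it
kubectl run quota-test --image=nginx:latest -n testquota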

Scale the Deployment so that the total resources in use exceed what the ResourceQuota allows

First, check the current resource usage

[yaml] kubectl describe resourcequotas -n testquota testquota-resources
Name:            testquota-resources
Namespace:       testquota
Resource         Used   Hard
--------         ----   ----
limits.cpu       500m   2
limits.memory    200Mi  2Gi
pods             1      4
requests.cpu     100m   1
requests.memory  100Mi  1Gi
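The same numbers are also exposed in the object's status, which is convenient for scripting (the exact output shape depends on your kubectl version):

kubectl get resourcequota testquota-resources -n testquota -o jsonpath='{.status.used}'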

Scale up the replicas

[yaml] kubectl scale deployment -n testquota nginx-deployment --replicas=4
deployment.apps/nginx-deployment scaled
[yaml] kubectl get po -n testquota
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7c6bbc77d8-5mbc6   1/1     Running   0          7s
nginx-deployment-7c6bbc77d8-ld69h   1/1     Running   0          7s
nginx-deployment-7c6bbc77d8-mfxnl   1/1     Running   0          5m18s
nginx-deployment-7c6bbc77d8-sdcxb   1/1     Running   0          7s

Current resource usage: with 4 pods each requesting 100m CPU / 100Mi memory and limited to 500m / 200Mi, the totals come to 400m / 400Mi requested and 2 CPU / 800Mi in limits, so limits.cpu and pods have now reached their hard quotas

[yaml] kubectl describe resourcequotas -n testquota testquota-resources
Name:            testquota-resources
Namespace:       testquota
Resource         Used   Hard
--------         ----   ----
limits.cpu       2      2
limits.memory    800Mi  2Gi
pods             4      4
requests.cpu     400m   1
requests.memory  400Mi  1Gi

Create another Deployment
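quota-deploy-2.yaml is not shown in the original; a plausible sketch, assuming it simply clones the first Deployment under the nginx2-deployment name seen in the output below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2-deployment
  namespace: testquota
  labels:
    app: nginx2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "500m"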

[yaml] kubectl apply -f quota-deploy-2.yaml
deployment.apps/nginx2-deployment created
[yaml] kubectl get deployment -n testquota
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment    4/4     4            4           7m48s
nginx2-deployment   0/1     0            0           34s

As you can see, the Deployment itself was created successfully, but its pod was never created. Check the Deployment for details:

[yaml] kubectl describe deployments -n testquota nginx2-deployment
Name:           nginx2-deployment
Namespace:      testquota
...
Replicas:       1 desired | 0 updated | 0 total | 0 available | 1 unavailable
NewReplicaSet:  nginx2-deployment-7c6bbc77d8 (0/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  98s   deployment-controller  Scaled up replica set nginx2-deployment-7c6bbc77d8 to 1

The pod cannot be created because the resources already in use in the namespace have reached the quota's hard limits (limits.cpu 2/2 and pods 4/4).
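The Deployment events above only show the scale-up; the concrete "exceeded quota" message is typically emitted by the ReplicaSet controller when its pod creation is rejected, so look at the ReplicaSet or the namespace events:

kubectl describe rs -n testquota nginx2-deployment-7c6bbc77d8
kubectl get events -n testquota --field-selector reason=FailedCreate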

3 Common resource types

Resource name                Description
limits.cpu                   Sum of CPU limits across all pods in the namespace
limits.memory                Sum of memory limits across all pods in the namespace
requests.cpu                 Sum of CPU requests across all pods in the namespace
requests.memory              Sum of memory requests across all pods in the namespace
requests.storage             Sum of storage requested across all PVCs in the namespace
persistentvolumeclaims       Number of PVCs in the namespace
requests.ephemeral-storage   Sum of local ephemeral storage requests
limits.ephemeral-storage     Sum of local ephemeral storage limits
limits.ephemeral-storage 本地临时存储限制总和