Kubernetes Persistent Volumes (PV)

1. PV Concepts

1.1 What is a PV?

PV stands for PersistentVolume. A PersistentVolume is a type of Volume that abstracts the underlying storage. PVs are created and configured by the cluster administrator and, like nodes, are cluster-scoped resources. A PV specifies the storage type, storage size, and access modes. Its lifecycle is independent of any Pod: even when the Pod using it is destroyed, the PV can remain.

PersistentVolumes connect to shared storage through a plugin mechanism. Kubernetes currently supports the following plugin types; among them, FlexVolume and CSI are the standard extension interfaces used to integrate storage devices from the various cloud vendors.

  • FlexVolume
  • CSI
  • NFS
  • RBD (Ceph Block Device)
  • CephFS
  • Glusterfs
  • HostPath
  • Local

1.2 ACCESS MODES: PV access modes

A PV's access modes describe how consuming applications may access the storage resource. The following modes are supported:

  • ReadWriteOnce (RWO): read-write, but mountable by only a single node
  • ReadOnlyMany (ROX): read-only, mountable by multiple nodes
  • ReadWriteMany (RWX): read-write, mountable by multiple nodes

1.3 RECLAIM POLICY: PV reclaim policies

PVs currently support three reclaim policies (a patch example follows the list):

  • Retain: keep the data; an administrator must clean it up manually
  • Recycle (deprecated): scrub the PV's data, equivalent to running rm -rf /ifs/kubernetes
  • Delete: delete the backing storage together with the PV
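
The reclaim policy of an existing PV can also be changed after the fact. A minimal sketch with kubectl patch (pv-demo-nfs1 is the PV name used in Demo 1 below):

# Switch a PV to Retain so its data survives PVC deletion
$ kubectl patch pv pv-demo-nfs1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'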

1.4 STATUS: PV phases

Over its lifecycle a PV can be in one of four phases (see the one-liner after this list for checking a PV's phase):

  • Available: free and not yet bound by any PVC
  • Bound: the PV has been bound by a PVC
  • Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
  • Failed: automatic reclamation of the PV failed
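
To check which phase a single PV is in without the full table output, you can query .status.phase directly; a one-liner sketch:

# Prints one of: Available / Bound / Released / Failed
$ kubectl get pv pv-demo-nfs1 -o jsonpath='{.status.phase}'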

2. What is a PVC?

PVC stands for PersistentVolumeClaim: a user's request for storage resources. A PVC is analogous to a Pod: Pods consume node resources, PVCs consume PV resources. A Pod can request CPU and memory; a PVC can request a specific amount of storage and specific access modes. Users who actually consume storage do not need to care about the underlying storage implementation; they just use PVCs.

2.1 What is a StorageClass?

Different applications have different storage requirements: read/write speed, concurrency, capacity, and so on. If PVs could only be claimed statically through PVCs, that clearly could not satisfy every application's storage needs. To solve this, Kubernetes introduced a new resource object, StorageClass. With StorageClasses, the cluster administrator can define storage as different classes in advance, e.g. fast storage, slow storage, shared storage (many readers/writers), block storage (single reader/writer), and so on.

When a user requests storage through a PVC, the StorageClass invokes a Provisioner (each storage backend has its own Provisioner) to automatically create the PV the user needs. Applications can therefore obtain suitable storage on demand, without worrying about whether the administrator pre-provisioned matching PVs.
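
For illustration, a sketch of what "fast" and "slow" class definitions might look like. The provisioner names here are placeholders, not real plugins; they must match a provisioner actually deployed in your cluster (the NFS demo below uses fuseim.pri/ifs):

# Hypothetical "fast" class, e.g. SSD-backed block storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: example.com/ssd   # placeholder provisioner name
reclaimPolicy: Retain
---
# Hypothetical "slow" shared class, e.g. NFS-backed (many readers/writers)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: example.com/nfs   # placeholder provisioner name
reclaimPolicy: Delete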

2.2 PV provisioning method 1: static provisioning

With static provisioning, an operations engineer has to create a batch of PVs in advance for developers to use. The drawback is the high maintenance cost.

2.2.1 Demo 1: static PV provisioning

On the NFS server, create the PV directories to be exported:

$ mkdir -p /nfs/kubernetes/pv{001,002,003}
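
For completeness, the directories also have to be exported by the NFS server; a minimal /etc/exports sketch (the export options here are typical defaults, and the client network is an assumption — adjust both to your environment):

# /etc/exports on the NFS server
/nfs/kubernetes 192.168.99.0/24(rw,sync,no_root_squash)

$ exportfs -r   # reload the export table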

Create three PVs of different sizes for different projects to mount:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo-nfs1   # PVs are cluster-scoped, so no namespace field
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /nfs/kubernetes/pv001   # its index.html contains "pv001"
    server: 192.168.99.105

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo-nfs2
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /nfs/kubernetes/pv002   # its index.html contains "pv002"
    server: 192.168.99.105

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo-nfs3
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /nfs/kubernetes/pv003   # its index.html contains "pv003"
    server: 192.168.99.105

Check the PVs:

$ kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-demo-nfs1   5Gi        RWX            Retain           Available                                   8s
pv-demo-nfs2   10Gi       RWX            Retain           Available                                   8s
pv-demo-nfs3   20Gi       RWX            Retain           Available                                   8s

Test mounting the NFS export from an NFS client:

$ yum install nfs-utils -y  # install on every node
$ mount -t nfs 192.168.100.200:/nfs/kubernetes /mnt
$ umount /mnt  # unmount after the test
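
Before mounting, you can also confirm that the export is visible from the node; showmount ships with nfs-utils:

$ showmount -e 192.168.100.200   # lists the directories the server exports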

Create a 5Gi PVC and a Pod that consumes it:

---
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo-pvc1
  namespace: default
spec:
  containers:
  - name: pod-demo-pvc1
    image: nginx:1.16.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: wwwroot
      mountPath: /usr/share/nginx/html
  volumes:
  - name: wwwroot
    persistentVolumeClaim:
      claimName: pvc-demo1

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo1
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi  # requests a 5Gi PV

An error appears:

0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.

This error means the PVC has not been bound to a PV. Here the PVC was created together with the Pod; you can either bind a PV to the PVC manually, or create a StorageClass so that PVs and PVCs bind automatically.
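
The quickest way to see why a PVC stays Pending is to read its events:

# The Events section at the bottom explains the binding failure,
# e.g. no PV matches the requested size or access mode
$ kubectl describe pvc pvc-demo1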

For the manual route, add a claimRef under spec in the PV's YAML:

spec:
  accessModes:
    - ReadWriteOnce
  claimRef:
    kind: PersistentVolumeClaim
    namespace: <namespace name>
    name: <PVC name>

After adding it, check the PV and PVC again.

In my case the PV had been claimed once before, but that mount failed, leaving the PV sitting in the Released state.

# delete the stale PV
kubectl delete pv pv004

Recreating it and binding again worked.

Note that to delete a PVC, you must delete the Pod first: a PVC that is still in use is protected (the kubernetes.io/pvc-protection finalizer) and cannot be removed directly.
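
A sketch of the required delete order; deleting the PVC first simply hangs in Terminating until the Pod is gone:

# Wrong order: blocks in Terminating while the Pod still uses the claim
$ kubectl delete pvc pvc-demo1

# Correct order: delete the consumer first, then the claim
$ kubectl delete pod pod-demo-pvc1
$ kubectl delete pvc pvc-demo1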

Test: the served content confirms the Pod is mounted on the 5Gi PV:

$ curl 10.244.36.126
pv001

Create a 12Gi PVC:

---
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo-pvc2
  namespace: default
spec:
  containers:
  - name: pod-demo-pvc2
    image: nginx:1.16.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: wwwroot
      mountPath: /usr/share/nginx/html
  volumes:
  - name: wwwroot
    persistentVolumeClaim:
      claimName: pvc-demo2

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo2
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 12Gi  # requests a 12Gi PV

The 20Gi PV is the one that gets bound: the 12Gi request is matched upward to the smallest PV that satisfies it.

$ kubectl get pv,pvc
NAME                            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/pv-demo-nfs1   5Gi        RWX            Retain           Bound       default/pvc-demo1                           11m
persistentvolume/pv-demo-nfs2   10Gi       RWX            Retain           Available                                               11m
persistentvolume/pv-demo-nfs3   20Gi       RWX            Retain           Bound       default/pvc-demo2                           11m

NAME                              STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc-demo1   Bound    pv-demo-nfs1   5Gi        RWX                           6m20s
persistentvolumeclaim/pvc-demo2   Bound    pv-demo-nfs3   20Gi       RWX                           11s

# test: write a file into the pv001 directory on the NFS server
$ echo pv001 > /nfs/kubernetes/pv001/1.txt
$ curl 10.244.169.139/1.txt
pv001

Summary of static PV provisioning:

  • A PVC is matched to a PV by rounding the requested size up; if no PV satisfies the request, the Pod stays Pending until a suitable PV appears in the cluster
  • Static provisioning requires an operations engineer to create a batch of PVs in advance for developers, which is costly to maintain

2.3 PV provisioning method 2: dynamic provisioning (recommended)

Dynamic PV provisioning is implemented with the StorageClass object. Kubernetes has no built-in dynamic provisioner for NFS-backed PVs; a community plugin is required.
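
As a side note, the external-storage project used below has since been archived on GitHub; its successor is nfs-subdir-external-provisioner. A sketch of the newer install route via Helm (chart location and value names as given in that project's README):

$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.100.184 \
    --set nfs.path=/data/nfs_data/nfs_client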

2.3.1 Demo 2: dynamic NFS-backed PV provisioning

GitHub: kubernetes-retired/external-storage (nfs-client)

Deploy the three config files nfs-client needs:

rbac.yaml
deployment.yaml
class.yaml

wget https://github.com/kubernetes-retired/external-storage/blob/master/nfs-client/deploy/rbac.yaml
wget https://github.com/kubernetes-retired/external-storage/blob/master/nfs-client/deploy/deployment.yaml
wget https://github.com/kubernetes-retired/external-storage/blob/master/nfs-client/deploy/class.yaml

Keep the defaults in /opt/kubernetes/cfg/nfs-client/rbac.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Modify the config /opt/kubernetes/cfg/nfs-client/deployment.yaml:

Change the NFS server address and the shared directory. If the image is hard to pull, switch to a China-hosted mirror such as
lizhenliang/nfs-client-provisioner:latest

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.100.184
            - name: NFS_PATH
              value: /data/nfs_data/nfs_client
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.100.184
            path: /data/nfs_data/nfs_client

Modify the config /opt/kubernetes/cfg/nfs-client/class.yaml:

The parameter parameters.archiveOnDelete controls whether volumes are archived. When it is "true" and a dynamically provisioned PVC is deleted, the volume directory is renamed to /data/nfs_data/nfs_client/archived-${NAMESPACE}-${PVC_NAME}-pvc-${VOLUME_ID} as an archive; this archive directory has to be cleaned up manually.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs  # or choose another name, must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "true"

Deploy the nfs-client plugin:

$ cd /opt/kubernetes/cfg/nfs-client && kubectl apply -f .

# check the StorageClass
$ kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  6m53s

# check the nfs-client Pod
$ kubectl get pods -l app=nfs-client-provisioner
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7b7646995b-kcrcf   1/1     Running   0          7m4s

# the Pod shows Running, but the logs contain errors
$ kubectl logs nfs-client-provisioner-7b7646995b-kcrcf
I0406 07:54:00.066739 1 leaderelection.go:185] attempting to acquire leader lease default/fuseim.pri-ifs...
E0406 07:54:17.589143 1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"fuseim.pri-ifs", GenerateName:"", Namespace:"default", SelfLink:"", UID:"123105fb-6655-41fc-97b8-e2c4b3841102", ResourceVersion:"31704880", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63784814678, loc:(*time.Location)(0x1956800)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"nfs-client-provisioner-7b7646995b-kcrcf_b816ab3b-b57e-11ec-b44e-3264c14d265d\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2022-04-06T07:54:17Z\",\"renewTime\":\"2022-04-06T07:54:17Z\",\"leaderTransitions\":2}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'nfs-client-provisioner-7b7646995b-kcrcf_b816ab3b-b57e-11ec-b44e-3264c14d265d became leader'
I0406 07:54:17.589344 1 leaderelection.go:194] successfully acquired lease default/fuseim.pri-ifs
I0406 07:54:17.589599 1 controller.go:631] Starting provisioner controller fuseim.pri/ifs_nfs-client-provisioner-7b7646995b-kcrcf_b816ab3b-b57e-11ec-b44e-3264c14d265d!
I0406 07:54:17.690180 1 controller.go:680] Started provisioner controller fuseim.pri/ifs_nfs-client-provisioner-7b7646995b-kcrcf_b816ab3b-b57e-11ec-b44e-3264c14d265d!
I0406 07:55:20.666604 1 controller.go:987] provision "default/pvc-sc" class "managed-nfs-storage": started
E0406 07:55:20.688072 1 controller.go:1004] provision "default/pvc-sc" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference

Kubernetes PVC creation fails with: error getting claim reference: selfLink was empty, can't make reference

On Kubernetes 1.20+, PVCs can fail to be provisioned because selfLink is disabled by default.

For a binary deployment, append --feature-gates=RemoveSelfLink=false to the end of the startup arguments in /opt/kubernetes/cfg/kube-apiserver.conf, then restart kube-apiserver.service to fix it.

Here the flag is written into the service configuration; make the change on masters 1, 2, and 3:

/usr/lib/systemd/system/kube-apiserver.service
systemctl daemon-reload
systemctl restart kube-apiserver
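
A sketch of what the change looks like; the exact variable layout of kube-apiserver.conf differs between installs, the point is only that the flag is appended to the existing startup options:

# /opt/kubernetes/cfg/kube-apiserver.conf (layout is install-specific)
KUBE_APISERVER_OPTS="... existing flags ... \
  --feature-gates=RemoveSelfLink=false"

Be aware that this is a stopgap: from Kubernetes 1.24 the RemoveSelfLink feature gate can no longer be disabled, so on 1.24+ you need a provisioner that does not depend on selfLink, such as the nfs-subdir-external-provisioner mentioned above.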

Error 2:

error: error validating "class.yaml": error validating data: [ValidationError(StorageClass.metadata): unknown field "parameters" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta, ValidationError(StorageClass.metadata): unknown field "provisioner" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta, ValidationError(StorageClass): missing required field "provisioner" in io.k8s.api.storage.v1.StorageClass]; if you choose to ignore these errors, turn validation off with --validate=false

A rookie mistake: the GitHub files would not download, so I copy-pasted them and the YAML lost its alignment; re-indenting the file fixed it.

3. Using the NFS StorageClass to create a PVC from a Pod

3.1 Install nfs-utils on all K8s cluster nodes

yum install -y nfs-utils

3.2 Create the PVC

---
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo-sc
  namespace: default
spec:
  containers:
  - name: pod-demo-sc
    image: nginx:1.16.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: wwwroot
      mountPath: /usr/share/nginx/html
  volumes:
  - name: wwwroot
    persistentVolumeClaim:
      claimName: pvc-sc

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sc
  namespace: default
spec:
  storageClassName: managed-nfs-storage  # name of the StorageClass to provision from
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi  # size of storage the PVC requests

3.3 Check the PVC

$ kubectl get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
pvc-sc   Bound    pvc-5c046797-553d-460d-8476-ffcdd8dfdd96   1Gi        RWX            managed-nfs-storage   99s

# note the VOLUME name: pvc-5c046797-553d-460d-8476-ffcdd8dfdd96

In the NFS server's export directory you will find a directory named default-pvc-…, matching the volume name above; this is the dynamically provisioned volume directory.

# exec into the Pod
kubectl exec -it pod-demo-sc -- /bin/bash
# write test data
cd /usr/share/nginx/html/
echo 1112222 > 1.txt
# check the data on the NFS server
ls /nfs/kubernetes/default-pvc-sc-pvc-b1aba39b-7b7d-45f4-9716-50f075c30598

# restart the Pod and verify the data is still there

$ ll /ifs/kubernetes/
total 4
drwxrwxrwx 2 root root 6 May 17 15:52 archived-default-test-claim-pvc-0c4ad205-d243-43ac-a315-8d4aaeabedc0
drwxrwxrwx 2 root root 24 May 17 16:14 default-pvc-sc-pvc-5c046797-553d-460d-8476-ffcdd8dfdd96

Directories with the archived-default-… prefix are dynamically provisioned volumes that were archived after their PVC was deleted; this behavior is controlled by the archiveOnDelete: "true" parameter in class.yaml.

3.4 Summary of dynamic PV provisioning:

  • 1. What is the relationship between PV and PVC?

One-to-one: a PV can be bound by at most one PVC.

  • 2. How are PV and PVC matched?

By access modes (accessModes) and storage size (storage); a label selector can narrow the match further (see the sketch below).

  • 3. What is the capacity-matching strategy?

The closest capacity that satisfies the request, rounding up.

  • 4. Does the capacity field really enforce a limit?

The actual capacity depends on the backing storage; the capacity field is mainly used for matching.
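
Besides accessModes and storage, a PVC can pin itself to particular PVs with a label selector; a minimal sketch (the disktype label is made up for illustration):

# PV side: label the volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-labeled
  labels:
    disktype: ssd   # illustrative label
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /nfs/kubernetes/pv001
    server: 192.168.99.105
---
# PVC side: only PVs carrying this label are candidates for binding
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-labeled
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      disktype: ssd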

4. How StorageClass reclaim settings affect data

4.1 Configuration 1

archiveOnDelete: "false"
reclaimPolicy: Delete   # not set explicitly; the default is Delete

Test results:

  • After the Pod is deleted and recreated, the data persists; the old directory and data are kept and reused by the new Pod
  • After the SC is deleted and recreated, the data persists; the old directory and data are kept and reused by the new Pod
  • After the PVC is deleted, the PV is deleted and the corresponding data on the NFS server is deleted

4.2 Configuration 2

archiveOnDelete: "false"  
reclaimPolicy: Retain  

Test results:

  • 1. After the Pod is deleted and recreated, the data persists; the old directory and data are kept and reused by the new Pod
  • 2. After the SC is deleted and recreated, the data persists; the old directory and data are kept and reused by the new Pod
  • 3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the data on the NFS server is kept
  • 4. After the SC is recreated, a newly created PVC binds a new PV; the old data can be copied into the new PV

4.3 Configuration 3

archiveOnDelete: "true"
reclaimPolicy: Retain

Test results:

  • 1. After the Pod is deleted and recreated, the data persists; the old directory and data are kept and reused by the new Pod
  • 2. After the SC is deleted and recreated, the data persists; the old directory and data are kept and reused by the new Pod
  • 3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the data on the NFS server is kept
  • 4. After the SC is recreated, a newly created PVC binds a new PV; the old data can be copied into the new PV

4.4 Configuration 4

archiveOnDelete: "true"
reclaimPolicy: Delete

Test results:

  • 1. After the Pod is deleted and recreated, the data persists; the old directory and data are kept and reused by the new Pod
  • 2. After the SC is deleted and recreated, the data persists; the old directory and data are kept and reused by the new Pod
  • 3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the data on the NFS server is kept
  • 4. After the SC is recreated, a newly created PVC binds a new PV; the old data can be copied into the new PV

Summary: except for the first configuration, the other three keep the data after the PV/PVC is deleted. (A note on switching configurations follows.)
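
One practical note when switching between these configurations: a StorageClass's parameters (and its reclaimPolicy) are immutable once created, so changing archiveOnDelete means deleting and recreating the class; PVs that already exist keep the behavior they were created with. A sketch:

# parameters cannot be updated in place; recreate the StorageClass instead
$ kubectl delete sc managed-nfs-storage
$ kubectl apply -f class.yaml   # class.yaml now carries the new settings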

5. Volume expansion

5.1 First attempt at expansion

# current size: 1Gi
kubectl get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
pvc-sc   Bound    pvc-1b34c518-cf9b-45d4-a5be-8136cf0c2097   1Gi        RWX            managed-nfs-storage   87m
# change the size to 2Gi
vim demo.yaml
·····
storage: 2Gi  # PVC raised to 2Gi from 1Gi
# then re-apply
kubectl apply -f demo.yaml
pod/pod-demo-sc configured
Error from server (Forbidden): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolumeClaim\",\"metadata\":{\"annotations\":{},\"name\":\"pvc-sc\",\"namespace\":\"default\"},\"spec\":{\"accessModes\":[\"ReadWriteMany\"],\"resources\":{\"requests\":{\"storage\":\"2Gi\"}},\"storageClassName\":\"managed-nfs-storage\"}}\n"}},"spec":{"resources":{"requests":{"storage":"2Gi"}}}}
to:
Resource: "/v1, Resource=persistentvolumeclaims", GroupVersionKind: "/v1, Kind=PersistentVolumeClaim"
Name: "pvc-sc", Namespace: "default"
for: "demo.yaml": persistentvolumeclaims "pvc-sc" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize

The error says the PVC is not allowed to be resized.

5.2 Does NFS as the backing store support PVC expansion?

Two conditions must be met for dynamic expansion:

  • The underlying storage supports volume expansion (and the backend has enough free capacity); already satisfied here
  • The StorageClass object has allowVolumeExpansion set to true

Since this is only a test with small requests, we modify the StorageClass object (class.yaml) directly, as follows:

# check the StorageClass: ALLOWVOLUMEEXPANSION is false, resizing not allowed
kubectl get storageclass
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  5h30m

Set allowVolumeExpansion to true to allow expansion:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs  # or choose another name, must match the deployment's env PROVISIONER_NAME
allowVolumeExpansion: true
parameters:
  archiveOnDelete: "true"

Re-apply it:

kubectl apply -f class.yaml
storageclass.storage.k8s.io/managed-nfs-storage configured
# check: ALLOWVOLUMEEXPANSION is now true
kubectl get storageclass
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           true                   5h32m

5.3 Second attempt at expansion

kubectl apply -f demo.yaml
pod/pod-demo-sc configured
persistentvolumeclaim/pvc-sc configured
# the resize did not happen
kubectl get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
pvc-sc   Bound    pvc-1b34c518-cf9b-45d4-a5be-8136cf0c2097   1Gi        RWX            managed-nfs-storage   92m
# look at the details
kubectl describe pvc pvc-sc
Name:          pvc-sc
Namespace:     default
StorageClass:  managed-nfs-storage
Status:        Bound
Volume:        pvc-1b34c518-cf9b-45d4-a5be-8136cf0c2097
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: fuseim.pri/ifs
               volume.kubernetes.io/storage-provisioner: fuseim.pri/ifs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       pod-demo-sc
Events:
  Type     Reason             Age    From           Message
  ----     ------             ----   ----           -------
  Warning  ExternalExpanding  2m14s  volume_expand  Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.


The error message says no plugin capable of expanding the volume was found. The official documentation states it plainly: Although the feature is enabled by default, a cluster admin must opt-in to allow users to resize their volumes. Kubernetes v1.11 ships with volume expansion support for the following in-tree volume plugins: AWS-EBS, GCE-PD, Azure Disk, Azure File, Glusterfs, Cinder, Portworx, and Ceph RBD.

NFS is not on that list (take note if you use NFS as the backing store). A possible fallback is sketched below.
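
Since the capacity field is not actually enforced for NFS (it is only used for matching, as noted in 3.4), a workable fallback is to recreate the claim with a larger request and move the data across; a sketch (the directory name is a placeholder for the real volume directory):

# 1. back up the volume's data on the NFS server
cp -a /data/nfs_data/nfs_client/default-pvc-sc-pvc-<id> /tmp/pvc-backup
# 2. delete the Pod and PVC, raise the request in demo.yaml, recreate
kubectl delete pod pod-demo-sc && kubectl delete pvc pvc-sc
kubectl apply -f demo.yaml    # demo.yaml now requests storage: 2Gi
# 3. copy the backed-up data into the newly provisioned directory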
