Getting started with Kubernetes: StatefulSet

Author's note: this post is my personal lab log of a k8s StatefulSet experiment, not a tutorial, and it may be updated from time to time.

Environment

# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
edge-node    Ready    <none>   15m   v1.17.0
edge-node2   Ready    <none>   16m   v1.17.0
ubuntu       Ready    master   67d   v1.17.0

StatefulSet

Summary

Create a StatefulSet and verify that its PVCs provide persistent storage.
The master node exports 3 NFS directories, readable and writable.
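The NFS server setup itself is not shown in this log; below is a minimal sketch, assuming the directories /nfs1, /nfs2 and /nfs3 on the master (192.168.0.102) are exported read/write. The export options are an assumption, not taken from the original setup.

# /etc/exports on the master (assumed options)
/nfs1 *(rw,sync,no_root_squash)
/nfs2 *(rw,sync,no_root_squash)
/nfs3 *(rw,sync,no_root_squash)

mkdir -p /nfs1 /nfs2 /nfs3   # make sure the directories exist
exportfs -ra                 # (re-)export everything listed in /etc/exports
showmount -e localhost       # verify the three directories are exported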

Experiment

A simple example

1. pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
  labels:
    storage: nfs
spec:
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"] # allow several modes, which may bind more easily
  #accessModes:
  #  - ReadWriteMany
  capacity:
    storage: 200Mi
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain # one of Delete, Recycle, Retain
  nfs:
    server: 192.168.0.102
    path: /nfs1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv2
  labels:
    storage: nfs
spec:
  capacity:
    storage: 100Mi # 5Gi
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"]
  nfs:
    server: 192.168.0.102
    path: /nfs2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv3
  labels:
    storage: nfs
spec:
  capacity:
    storage: 100Mi # 5Gi
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"]
  nfs:
    server: 192.168.0.102
    path: /nfs3

2. nginx-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web # name of the StatefulSet
spec:
  serviceName: "nginx"
  replicas: 3 # by default is 1
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: latelee/lidch
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      #storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 10Mi

Note: 3 PVs are created because the StatefulSet has 3 replicas; in principle there should be at least as many available PVs as replicas.

Checking

Create:

kubectl apply -f pv.yaml
kubectl apply -f nginx-service.yaml

Check:

kubectl get statefulset
# details:
kubectl describe statefulset web
# a single pod:
kubectl describe pod web-0

Check PV and PVC:

# kubectl get pv   // created up front, then claimed by the www-web-* PVCs
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
nfs-pv1   200Mi      RWO,ROX,RWX    Retain           Bound    default/www-web-2                           3m47s
nfs-pv2   100Mi      RWO,ROX,RWX    Retain           Bound    default/www-web-0                           3m47s
nfs-pv3   100Mi      RWO,ROX,RWX    Retain           Bound    default/www-web-1                           3m47s

# kubectl get pvc  // note: these are created automatically
NAME        STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound    nfs-pv2   100Mi      RWO,ROX,RWX                   3m51s
www-web-1   Bound    nfs-pv3   100Mi      RWO,ROX,RWX                   3m44s
www-web-2   Bound    nfs-pv1   200Mi      RWO,ROX,RWX                   3m41s

Testing the storage

In one terminal run kubectl get pod -w -l app=nginx to watch the pods; in another run kubectl delete pod -l app=nginx. After the pods are deleted they are recreated one by one, in order.
Check the hostnames:

# for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done
web-0
web-1

Create a pod named dns-test in the cluster and drop into its shell; when the shell exits, the pod is deleted (because of --rm):

kubectl run -it --image latelee/busybox dns-test --restart=Never --rm /bin/sh

Run nslookup web-XX.nginx against each pod. Command and output:

/ # nslookup web-0.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'web-0.nginx'
/ #
/ # nslookup web-1.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'web-1.nginx'
/ # nslookup web-2.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'web-2.nginx'
/ #

Note: why web-0.nginx cannot be resolved here is unknown for now.
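One thing worth trying (not attempted in this log) is the fully qualified name: a StatefulSet pod gets the DNS record <pod>.<service>.<namespace>.svc.cluster.local, so for the default namespace that would be:

nslookup web-0.nginx.default.svc.cluster.local

Some busybox images also ship an nslookup applet that does not play well with kube-dns, so repeating the test from a different image may help.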

Write each pod's hostname into its index.html (note: because the volume is already mounted, the content ends up in the corresponding NFS directory):

for i in 0 1 2; do kubectl exec web-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done

As expected, the pod named web-0 has the hostname web-0, and so on.
Pod distribution at this point:

NAME    READY   STATUS    RESTARTS   AGE    IP             NODE         NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          6m8s   10.244.4.133   edge-node2   <none>           <none>
web-1   1/1     Running   0          6m7s   10.244.1.125   edge-node    <none>           <none>
web-2   1/1     Running   0          6m5s   10.244.4.134   edge-node2   <none>           <none>

NFS directories on the master at this point:

# cat /nfs1/index.html 
web-2
# cat /nfs2/index.html
web-0
# cat /nfs3/index.html
web-1

Delete the pods:

kubectl delete pod -l app=nginx

After the pods have been recreated, the distribution is:

NAME    READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          31s   10.244.4.142   edge-node2   <none>           <none>
web-1   1/1     Running   0          21s   10.244.1.129   edge-node    <none>           <none>
web-2   1/1     Running   0          19s   10.244.4.143   edge-node2   <none>           <none>

Note: the pods land on the same nodes as before, but their IPs have changed (so presumably they really were rescheduled).
Check each pod's page:

for i in 0 1 2; do kubectl exec web-$i -- sh -c 'cat /usr/share/nginx/html/index.html'; done
Output:
web-0
web-1
web-2

Checking the NFS directories again shows no change.
Conclusion: the file contents are persisted on disk; the files survive pod restarts and do not change when pods are rescheduled. Note that the NFS directories are not mapped one-to-one by index (here web-1 maps to /nfs3), but once a binding is made it does not change, which keeps the content consistent.

Scaling up

Scale up to 5 replicas:

# kubectl scale sts web --replicas=5

Now one pod cannot be created and stays in Pending, because there are not enough PVs.

web-3   0/1     Pending   0          42s
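To see why it is stuck, describe the pod; the Events section shows the same "unbound immediate PersistentVolumeClaims" message quoted in the Problems section below:

kubectl describe pod web-3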

On top of the earlier pv.yaml, add 2 more NFS directories (and 2 more PVs), restart the NFS exports, and apply the update with kubectl apply -f pv.yaml; a sketch of the additions follows.
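The extra PV definitions are not shown in the original log; a minimal sketch, assuming /nfs4 and /nfs5 are created and exported on the master in the same way as the first three directories:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv4
  labels:
    storage: nfs
spec:
  capacity:
    storage: 100Mi
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"]
  nfs:
    server: 192.168.0.102
    path: /nfs4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv5
  labels:
    storage: nfs
spec:
  capacity:
    storage: 100Mi
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"]
  nfs:
    server: 192.168.0.102
    path: /nfs5

PV status after applying the update: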

# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
nfs-pv1   200Mi      RWO,ROX,RWX    Retain           Bound       default/www-web-2                           19m
nfs-pv2   100Mi      RWO,ROX,RWX    Retain           Bound       default/www-web-0                           19m
nfs-pv3   100Mi      RWO,ROX,RWX    Retain           Bound       default/www-web-1                           19m
nfs-pv4   100Mi      RWO,ROX,RWX    Retain           Bound       default/www-web-3                           8s
nfs-pv5   100Mi      RWO,ROX,RWX    Retain           Available                                               8s

Note: because the PVs were created only after web-3, the pod bound as soon as a PV became available; nfs-pv5 is not yet bound at this moment (it binds a short while later).

Scaling down:

kubectl patch sts web -p '{"spec":{"replicas":3}}'

web-3 and web-4 are then deleted.
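The same thing can be done with the scale command used for scaling up:

kubectl scale sts web --replicas=3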

Upgrading

kubectl set image sts/web nginx=latelee/lidch:1.1

Meaning: update sts/web by setting the image of its container named nginx to latelee/lidch:1.1.
Check:

# kubectl get sts -o wide
NAME   READY   AGE   CONTAINERS   IMAGES
web    3/3     69m   nginx        latelee/lidch:1.1
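Not part of the original log, but the rollout can also be watched and, if needed, reverted with the standard rollout commands, which work for StatefulSets as well:

kubectl rollout status sts/web    # wait until all replicas run the new image
kubectl rollout history sts/web   # list revisions
kubectl rollout undo sts/web      # roll back to the previous revision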

Deleting:

kubectl delete -f nginx-service.yaml
kubectl delete -f pv.yaml

Deleting the StatefulSet does not delete the PVCs; they have to be removed manually.
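For this example the manual cleanup would look like this (PVC names follow the <template>-<statefulset>-<ordinal> pattern):

kubectl delete pvc www-web-0 www-web-1 www-web-2

Because the reclaim policy is Retain, any data already written to /nfs1, /nfs2 and /nfs3 stays on the NFS server until it is removed by hand.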

Problems

1.

error while running "VolumeBinding" filter plugin for pod "web-0": pod has unbound immediate PersistentVolumeClaims

no persistent volumes available for this claim and no storage class is set

Cause: the PVC was not created. With 3 replicas there should be 3 PVCs (and 3 PVs).
Check:

kubectl get pvc

Dig further:

kubectl describe pvc www-web-0
Output:
storageclass.storage.k8s.io "my-storage-class" not found

After cleaning everything up, querying PV and PVC returns nothing:

kubectl get pvc
kubectl get pv

Because the volumeClaimTemplates template is used, there is no need to create PVCs by hand. The "my-storage-class not found" message comes from leaving storageClassName uncommented when no such StorageClass exists; comment it out (as in the YAML above), create enough PVs (3 in this example), and then create the StatefulSet, and the PVCs are generated automatically.

2.
If different PVs use the same NFS path, they interfere with each other: a file modified from pod1 also shows up in pod2.

3.

mount.nfs: access denied by server while mounting 192.168.0.102:/nfs3

Either 1) the directory does not exist on the server, or 2) it has not been exported.
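A quick way to check and fix this on the NFS server (the export options here are an assumption, as in the earlier exports sketch):

showmount -e 192.168.0.102                         # list what the server actually exports
mkdir -p /nfs3                                     # make sure the directory exists
echo '/nfs3 *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra                                       # re-read /etc/exports and re-export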
