K8s Persistence

K8s Storage

What are the main categories of K8s storage?

Ephemeral storage, semi-persistent storage, and persistent storage.
emptyDir

emptyDir volumes are generally used as temporary scratch space, for example by microservices whose data does not need to be persisted; for such Pods, emptyDir is a reasonable storage choice.

What is emptyDir?

When a Pod is configured with an emptyDir volume, an empty volume is carved out of the disk of the node the Pod is scheduled on at Pod startup. The volume starts out completely empty; once the containers are running, the data they produce is written into it, and it serves as a temporary volume that all containers in the Pod can read from and write to. As soon as the Pod is destroyed, this temporary volume on the node is destroyed with it.
Uses of emptyDir

- Temporary scratch space, for data produced by the Pod's containers that does not need to be persisted.
- Checkpointing, so that a long computation that crashes partway through can be resumed.
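Besides the default disk-backed form, emptyDir also accepts a medium and a size limit. A minimal sketch (the Pod and volume names are illustrative):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: scratch-demo          # illustrative name
spec:
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory          # back the volume with tmpfs (node RAM) instead of node disk
      sizeLimit: 64Mi         # kubelet evicts the Pod if usage exceeds this
  containers:
  - name: worker
    image: busybox
    args: ["/bin/sh", "-c", "sleep 30000"]
    volumeMounts:
    - mountPath: /scratch
      name: scratch
```

Note that a RAM-backed emptyDir counts against the container's memory consumption, so it should be combined with sensible memory limits.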
HostPath

What is HostPath?

A hostPath volume maps a file or directory from the node's filesystem into the Pod. When using a hostPath volume you can also set the type field, which supports values for files, directories, and so on.

HostPath is essentially the K8s equivalent of docker's -v directory mapping, except that in K8s Pods can be rescheduled: when a Pod drifts to another node, it does not read the directory across nodes. For that reason HostPath can only be considered a semi-persistent form of storage.

Uses of HostPath

- When a running container needs access to Docker internals, hostPath can be used to map the relevant server directory into the container.
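The type field mentioned above controls how the host path is validated before the Pod starts; a minimal sketch, with illustrative names:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: hostpath-demo            # illustrative name
spec:
  volumes:
  - name: host-dir
    hostPath:
      path: /data/hostpath
      type: DirectoryOrCreate    # create the directory on the node if it is missing;
                                 # "Directory" instead would fail the Pod when the path does not exist
  containers:
  - name: app
    image: busybox
    args: ["/bin/sh", "-c", "sleep 30000"]
    volumeMounts:
    - mountPath: /host-data
      name: host-dir
```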
PV and PVC

PV: external storage for the K8s cluster, usually a pre-provisioned piece of storage space (for example a directory in a filesystem). The PV is the producer.

PVC: when an application needs persistence, it requests space from a PV through a PVC. The PVC is the consumer.

PV and PVC bind one-to-one. When a PV is claimed by a PVC its status shows Bound, and no other PVC can use a PV that is already bound. A PVC that cannot find a suitable PV stays in the Pending state. Once a PVC is bound to a PV it behaves like a storage volume, and at that point the PVC can be used by multiple Pods (whether a PVC actually supports access by multiple Pods depends on its accessModes).
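Binding is driven by matching fields on the two objects: the PV's storageClassName must equal the PVC's, its accessModes must include the requested mode, and its capacity must cover the request. A minimal sketch (the names and the hostPath backing are illustrative):

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-demo                  # illustrative name
spec:
  capacity:
    storage: 1Gi                 # must be >= the PVC's request below
  accessModes:
  - ReadWriteOnce                # must include the mode the PVC asks for
  storageClassName: demo-class   # must equal the PVC's storageClassName
  hostPath:
    path: /tmp/pv-demo
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-demo                 # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi             # satisfied by the 1Gi PV above
  storageClassName: demo-class
```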
Examples

emptyDir
Create the YAML file

vim emptydir.yaml

File contents:

kind: Pod
apiVersion: v1
metadata:
  name: emptydir-consumer
spec:
  volumes:
  - name: shared-volume
    emptyDir: {}
  containers:
  - name: emptydir
    image: busybox
    volumeMounts:
    - mountPath: /empty_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - echo "hello world" > /empty_dir/hello.txt; sleep 30000
  - name: consumer
    image: busybox
    volumeMounts:
    - mountPath: /consumer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello.txt; sleep 30000
[root@master yaml]# kubectl apply -f emptydir.yaml
pod/emptydir-consumer created

View container logs

[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
emptydir-consumer 2/2 Running 0 2m18s
[root@master yaml]# kubectl logs emptydir-consumer emptydir
[root@master yaml]# kubectl logs emptydir-consumer consumer
hello world

Verify how emptyDir works

Check which node the Pod is running on

[root@master yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
emptydir-consumer 2/2 Running 0 87s 10.244.1.3 node02 <none> <none>

Inspect the container's details on node02
[root@node02 ~]# docker inspect 04ee0cd5f3c6
......
"Mounts": [
    {
        "Type": "bind",
        "Source": "/var/lib/kubelet/pods/7d0f9cf7-8673-442e-b3c1-46afe6f4fa18/volumes/kubernetes.io~empty-dir/shared-volume",
        "Destination": "/consumer_dir",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
......
[root@node02 ~]# cd /var/lib/kubelet/pods/7d0f9cf7-8673-442e-b3c1-46afe6f4fa18/volumes/kubernetes.io~empty-dir/shared-volume
[root@node02 shared-volume]# ls
hello.txt

Delete the Pod and check whether the file still exists on the node

[root@master yaml]# ls
emptydir.yaml
[root@master yaml]# kubectl delete -f emptydir.yaml
pod "emptydir-consumer" deleted

On node02:

[root@node02 ~]# cd /var/lib/kubelet/pods/7d0f9cf7-8673-442e-b3c1-46afe6f4fa18/volumes/kubernetes.io~empty-dir/shared-volume
-bash: cd: /var/lib/kubelet/pods/7d0f9cf7-8673-442e-b3c1-46afe6f4fa18/volumes/kubernetes.io~empty-dir/shared-volume: No such file or directory

HostPath (similar to docker -v)
Create the YAML file

[root@master yaml]# mkdir -p /data/hostpath
[root@master yaml]# vim hostpath.yaml

kind: Pod
apiVersion: v1
metadata:
  name: pod
spec:
  volumes:
  - name: share-volume
    hostPath:
      path: "/data/hostpath"
  containers:
  - name: httpd
    image: httpd
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: share-volume
    args:
    - /bin/bash
    - -c
    - echo "hello httpd" > /usr/share/nginx/html/index.html; sleep 30000

[root@master yaml]# kubectl apply -f hostpath.yaml
pod/pod created

View the Pod
Check which node the Pod was created on

[root@master yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod 1/1 Running 0 2m51s 10.244.2.4 node01 <none> <none>

On node01, check whether the mapped directory and files exist

[root@node01 ~]# ls /data/hostpath/
hello.txt index.html

Verify HostPath

Delete the Pod

[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod 1/1 Running 0 5m9s
[root@master yaml]# kubectl delete pod pod
pod "pod" deleted

On node01, check whether the mapped files still exist

[root@node01 ~]# ls /data/hostpath/
hello.txt index.html

Creating a PV and PVC backed by NFS
| kmaster | kworker1 | kworker2 | NFS |
|---|---|---|---|
| 192.168.3.10 | 192.168.3.11 | 192.168.3.12 | 192.168.3.243 |
Install NFS

PS: note that nfs-utils must be installed on every server.

[root@nfs ~]# yum -y install nfs-utils rpcbind
[root@nfs ~]# mkdir /nfsdata
[root@nfs ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@nfs ~]# systemctl start nfs
[root@nfs ~]# systemctl start rpcbind
[root@nfs ~]# systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@nfs ~]# systemctl enable rpcbind
[root@nfs ~]# showmount -e
Export list for nfs:
/nfsdata *

Create a PV bound to the NFS export
[root@master yaml]# vim pv.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.3.243

[root@master yaml]# kubectl apply -f pv.yaml
persistentvolume/pv created

PS: create the export subdirectory on the NFS server

[root@nfs ~]# cd /nfsdata/
[root@nfs nfsdata]# ls
[root@nfs nfsdata]# mkdir pv1

Access modes supported by a PV:
- ReadWriteOnce: the PV can be mounted read-write by a single node.
- ReadOnlyMany: the PV can be mounted read-only by many nodes.
- ReadWriteMany: the PV can be mounted read-write by many nodes.
Create a PVC and bind it to the PV

[root@master yaml]# vim pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs

[root@master yaml]# kubectl apply -f pvc.yaml
persistentvolumeclaim/pvc unchanged

Check the result
[root@master yaml]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv 1Gi RWO Recycle Bound default/pvc nfs 4m47s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pvc Bound pv 1Gi RWO nfs 2m24s

PS: a STATUS of Bound means this PVC is now bound to the PV.

Create a Pod that references the PVC
[root@master yaml]# vim pod.yaml

kind: Pod
apiVersion: v1
metadata:
  name: pod
spec:
  volumes:
  - name: share-data
    persistentVolumeClaim:
      claimName: pvc
  containers:
  - name: pod
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - mountPath: "/data"
      name: share-data

[root@master yaml]# kubectl apply -f pod.yaml
pod/pod created
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod 1/1 Running 0 5s

Verify that storage works
On the NFS server:

[root@nfs pv1]# pwd
/nfsdata/pv1
[root@nfs pv1]# echo "hello persistenVolume" > test.txt

On the master:

PS: /data/test.txt is the mount path of the storage inside the container.

[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod 1/1 Running 0 100s
[root@master yaml]# kubectl exec pod cat /data/test.txt
hello persistenVolume

Reclaiming PV space
spec:
......
  persistentVolumeReclaimPolicy: Recycle
......

PV reclaim policies:

- Recycle: scrubs the data and makes the PV available again automatically.
- Retain: the data must be cleaned up and the PV reclaimed manually.
- Delete: deletes the backing storage asset; used with cloud storage back ends.

[root@master yaml]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv 1Gi RWO Recycle Bound default/pvc nfs 23m

Verify the PV reclaim policy
Delete the Pod and PVC

[root@master yaml]# kubectl delete pod pod
pod "pod" deleted
[root@master yaml]# kubectl delete pvc pvc
persistentvolumeclaim "pvc" deleted

Watch the PV being released

Status transitions: Bound (claimed) → Released (recycling) → Available (ready to be claimed again)

[root@master yaml]# kubectl get pv -w
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON
pv 1Gi RWO Recycle Bound default/pvc nfs 33m
pv 1Gi RWO Recycle Released default/pvc nfs 33m
pv 1Gi RWO Recycle Released nfs 33m
pv 1Gi RWO Recycle Available nfs 33m

Check the NFS server

[root@nfs pv1]# pwd
/nfsdata/pv1
[root@nfs pv1]# ls

Verify the Retain policy
Edit the PV's YAML file

[root@master yaml]# vim pv.yaml
......
persistentVolumeReclaimPolicy: Retain
......

Apply the PV and Pod YAML files again

[root@master yaml]# kubectl apply -f pv.yaml
persistentvolume/pv created
[root@master yaml]# kubectl apply -f pod.yaml
pod/pod created

With the resources recreated, delete the PVC and Pod again and check whether the data under the PV directory survives.
On the master:

[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod 1/1 Running 0 66s
[root@master yaml]# kubectl exec pod touch /data/test.txt

On the NFS server:

[root@nfs pv1]# pwd
/nfsdata/pv1
[root@nfs pv1]# ls
test.txt

Delete the Pod and PVC again

[root@master yaml]# kubectl delete pod pod
pod "pod" deleted
[root@master yaml]# kubectl delete pvc pvc
persistentvolumeclaim "pvc" deleted

Check the data kept under the PV directory

[root@nfs pv1]# pwd
/nfsdata/pv1
[root@nfs pv1]# ls
test.txt

Automatically creating PVs and PVCs
Enable NFS

Follow the same steps as in the "Install NFS" section above.

Enable RBAC permissions

RBAC stands for Role-Based Access Control.

[root@master yaml]# vim rbac-rolebind.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: test
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: test    # if you are not using a dedicated namespace, set this to default, otherwise the binding fails
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

[root@master yaml]# kubectl apply -f rbac-rolebind.yaml
namespace/test created
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created

Create the NFS provisioner Pod resource
[root@master yaml]# vim nfs-deployment.yaml

apiVersion: extensions/v1beta1    # deprecated; on K8s v1.16+ use apps/v1 and add a spec.selector
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: test
spec:
  replicas: 1
  strategy:
    type: Recreate                # stop the old Pod before starting a new one
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner    # the ServiceAccount created above
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME    # provisioner name, referenced later by the StorageClass
          value: test-www
        - name: NFS_SERVER
          value: 192.168.3.243
        - name: NFS_PATH            # the NFS export directory
          value: /nfsdata
      volumes:                      # the NFS server and path mounted into the container
      - name: nfs-client-root
        nfs:
          server: 192.168.3.243
          path: /nfsdata

[root@master yaml]# kubectl apply -f nfs-deployment.yaml
deployment.extensions/nfs-client-provisioner created

PS: the nfs-client-provisioner image mounts the remote NFS export locally through the cluster's built-in NFS driver, registers itself as a storage provisioner, and is then referenced by a StorageClass resource.
Create the StorageClass resource

[root@master yaml]# vim storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass       # StorageClass is cluster-scoped, so it takes no namespace
provisioner: test-www      # must match the PROVISIONER_NAME env value in the Deployment above
reclaimPolicy: Retain

[root@master yaml]# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/storageclass created

Create the Pod resources

PS: adding a volumeClaimTemplates field to the workload makes the PVC get created automatically.
[root@master yaml]# vim mysql.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql-svc
  namespace: test
  labels:
    app: mysql-svc
spec:
  type: NodePort
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-statefulset
  namespace: test
spec:
  serviceName: mysql-svc
  replicas: 1
  selector:
    matchLabels:
      app: mysql-pod
  template:
    metadata:
      labels:
        app: mysql-pod
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 123.com
        volumeMounts:
        - name: share-mysql
          mountPath: /var/lib/mysql
  volumeClaimTemplates:    # this field makes K8s create the PVC automatically
  - metadata:
      name: share-mysql
      annotations:         # references the StorageClass; the name must match the one created above
        volume.beta.kubernetes.io/storage-class: storageclass
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

[root@master yaml]# kubectl apply -f mysql.yaml
service/mysql-svc created
statefulset.apps/mysql-statefulset created

View the Pod, PV, and PVC
[root@master yaml]# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
mysql-statefulset-0 1/1 Running 0 6m9s
[root@master yaml]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7 100Mi RWO Delete Bound test/share-mysql-mysql-statefulset-0 storageclass 2m31s
[root@master yaml]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
share-mysql-mysql-statefulset-0 Bound pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7 100Mi RWO storageclass 7m28s

Check whether the persisted directory exists
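The generated names seen here follow two fixed conventions: a StatefulSet's volumeClaimTemplates produce PVCs named `<template-name>-<statefulset-name>-<ordinal>`, and nfs-client-provisioner creates one subdirectory per volume under the NFS export named `<namespace>-<pvc-name>-<pv-name>`. A small sketch with the values from this transcript:

```shell
# 1) PVC name generated by a StatefulSet volumeClaimTemplate:
#    <template-name>-<statefulset-name>-<ordinal>
template=share-mysql
statefulset=mysql-statefulset
ordinal=0
pvc_name="${template}-${statefulset}-${ordinal}"
echo "$pvc_name"    # share-mysql-mysql-statefulset-0

# 2) Directory created by nfs-client-provisioner under the NFS export:
#    <namespace>-<pvc-name>-<pv-name>
namespace=test
pv_name=pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7    # random suffix assigned by the cluster
echo "${namespace}-${pvc_name}-${pv_name}"
# test-share-mysql-mysql-statefulset-0-pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7
```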
[root@nfs nfsdata]# pwd
/nfsdata
[root@nfs nfsdata]# ls
test-share-mysql-mysql-statefulset-0-pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7

Verify data storage
On the master:
[root@master yaml]# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
mysql-statefulset-0 1/1 Running 0 11m
[root@master yaml]# kubectl exec -it -n test mysql-statefulset-0 bash
root@mysql-statefulset-0:/# mysql -u root -p123.com
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.32 MySQL Community Server (GPL)
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database test;
Query OK, 1 row affected (0.10 sec)

On the NFS server:
[root@nfs test-share-mysql-mysql-statefulset-0-pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7]# pwd
/nfsdata/test-share-mysql-mysql-statefulset-0-pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7
[root@nfs test-share-mysql-mysql-statefulset-0-pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7]# ls
auto.cnf client-cert.pem ibdata1 ibtmp1 private_key.pem server-key.pem
ca-key.pem client-key.pem ib_logfile0 mysql public_key.pem sys
ca.pem ib_buffer_pool ib_logfile1 performance_schema server-cert.pem test

Delete the Pod and check whether the data still exists after it is recreated
[root@master yaml]# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
mysql-statefulset-0 1/1 Running 0 16m
[root@master yaml]# kubectl delete pod -n test mysql-statefulset-0
pod "mysql-statefulset-0" deleted
[root@master yaml]# kubectl get pod -n test -w
NAME READY STATUS RESTARTS AGE
mysql-statefulset-0 1/1 Terminating 0 49s
mysql-statefulset-0 0/1 Terminating 0 51s
mysql-statefulset-0 0/1 Terminating 0 52s
mysql-statefulset-0 0/1 Terminating 0 52s
mysql-statefulset-0 0/1 Pending 0 0s
mysql-statefulset-0 0/1 Pending 0 0s
mysql-statefulset-0 0/1 ContainerCreating 0 0s
mysql-statefulset-0 1/1 Running 0 1s

Log in again and check whether the data still exists
[root@master yaml]# kubectl exec -it -n test mysql-statefulset-0 bash
root@mysql-statefulset-0:/# mysql -u root -p123.com
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.32 MySQL Community Server (GPL)
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| test |
+--------------------+
5 rows in set (0.01 sec)