1) the PV resource
2) the Pod that makes a claim on the PV.
Let's go with the first resource, the PV. For this we will back it with an NFS share.
$ ssh root@$SERVER-IP
- Create a directory named /var/tmp/share where only the user and group nfsnobody have any permissions.
# mkdir -p /var/tmp/share
# chown nfsnobody:nfsnobody /var/tmp/share
# chmod 700 /var/tmp/share
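As a quick sanity check (not part of the original steps), you can confirm the ownership and mode before moving on:
# ls -ld /var/tmp/share
It should report drwx------ with owner and group nfsnobody, matching the chown and chmod above.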
- Under /etc/, create an exports file setting the location of your NFS directory.
Create a new file under /etc/exports.d/$NAME with the following contents:
/var/tmp/share *(rw,async,all_squash)
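For example, assuming a hypothetical file name ex280.exports (exportfs only reads files in /etc/exports.d/ whose names end in .exports; the all_squash option maps every client user to nfsnobody, which is why the directory above is owned by nfsnobody):
# echo '/var/tmp/share *(rw,async,all_squash)' > /etc/exports.d/ex280.exports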
- Export the folder so that the OpenShift node can access the storage. If using SELinux, make sure that SELinux permits usage of NFS on both the NFS server and OpenShift node1.
# exportfs -a
If using SELinux:
# setsebool virt_use_nfs 1
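To confirm the export is active (an optional check), list the exported file systems on the server:
# exportfs -v
or query it from another host with showmount -e $NFS-SERVER-IP.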
- Verify that node1 has access to the NFS share by manually mounting the newly created folder on the /mnt directory.
$ ssh root@$NODE-1-IP
# mount -t nfs $NFS-SERVER-IP:/var/tmp/share /mnt
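Once the test mount succeeds it can be unmounted again so /mnt is left free (a cleanup step I'm assuming, not spelled out in the original walkthrough):
# umount /mnt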
And if all of this works, we generate the YAML for the PV. We create a file, for example, pv-nfs.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ex280-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /var/tmp/share
    server: $NFS-LOCATION
  persistentVolumeReclaimPolicy: Retain
and then we run:
oc create -f pv-nfs.yaml
To check that everything is OK:
[root@qqmelo1c ~]# oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
ex280-data 10Gi RWO Retain Available 11m
[root@qqmelo1c ~]# oc describe pv ex280-data
Name: ex280-data
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: qqmelo3c.mylabserver.com
Path: /var/tmp/share
ReadOnly: false
Events: <none>
[root@qqmelo1c ~]#
As we can see, everything is OK: the PV sits there as Available, waiting for the Pod that will claim it!
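To round out item 2) at the top, here is a minimal sketch of the claim and the Pod that consumes it. The names ex280-claim and ex280-pod, the image and the mount path are my own illustrative choices, not part of the original post; the claim just has to ask for ReadWriteOnce and at most the 10Gi that the PV offers:
# pvc-pod.yaml (hypothetical file name)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ex280-claim
spec:
  accessModes:
    - ReadWriteOnce        # must match the PV's access mode
  resources:
    requests:
      storage: 10Gi        # at most what the PV provides
---
apiVersion: v1
kind: Pod
metadata:
  name: ex280-pod
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi   # placeholder image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data              # illustrative mount point inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ex280-claim          # binds this volume to the claim above
After oc create -f pvc-pod.yaml, oc get pv should show ex280-data go from Available to Bound, with the CLAIM column pointing at the new claim.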