Hello,
I have been trying to mount persistent volumes with Admiralty, but I have run into an odd situation. I have tested the PersistentVolumeClaims without multicluster-scheduler, and everything works well.
My situation is the following: I have two clusters running on Google Kubernetes Engine, one source cluster (cluster-cd) and one target cluster (cluster-1).
I have created two PersistentVolumeClaims on each cluster:
- cluster-cd -> pvc-cd and pvc-demo
- cluster-1 -> pvc-1 and pvc-demo
Note that the two pvc-demo claims do not point to the same PersistentVolume; only the name is the same.
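For reference, the claims were created along these lines (a minimal sketch; the storage size and access mode are assumptions, since the exact manifests are not shown here):

```yaml
# Hypothetical reconstruction: applied once per cluster with the matching
# name (pvc-cd, pvc-1, or pvc-demo). On GKE, the default StorageClass
# dynamically provisions an independent Persistent Disk for each claim,
# which is why the two pvc-demo claims end up bound to different
# PersistentVolumes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```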
Also, I use the following kind of job to test them (adapted from the quick start guide).
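A minimal sketch of the Job (the name pvc-test is a placeholder, and the election annotation is the one from the quick start; only the claimName line changes between the cases below):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pvc-test
spec:
  template:
    metadata:
      annotations:
        multicluster.admiralty.io/elect: ""  # mark the pod for multi-cluster scheduling
    spec:
      containers:
        - name: c
          image: busybox
          command: ["sh", "-c", "echo hello world >> /mnt/data/hello.txt"]
          volumeMounts:
            - name: data
              mountPath: /mnt/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: pvc-cd  # swapped for pvc-1 or pvc-demo in the other cases
      restartPolicy: Never
```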
Case 1
I set claimName: pvc-cd (the PVC on the source cluster).
The pods stay in Pending status (in both the source and target clusters), and the pod description in the source cluster context gives me the following error:
Warning FailedScheduling 44s (x3 over 111s) admiralty-proxy 0/4 nodes are available: 1 , 3 node(s) didn't match node selector.
Case 2
I set claimName: pvc-1 (the PVC on the target cluster).
The pods stay in Pending status (only in the source cluster this time; the pod does not even show up in the target cluster).
The pod description in the source cluster context gives me the following error:
Warning FailedScheduling 48s (x3 over 118s) admiralty-proxy persistentvolumeclaim "pvc-1" not found
Case 3
I set claimName: pvc-demo (a PVC that exists on both clusters, but refers to different locations).
In this case, it seems to work. However, the output of echo hello world >> /mnt/data/hello.txt is written to the PVC of the target cluster.
Conclusion
I understand the behavior in the three cases. However, is there a way in Admiralty to use PersistentVolumeClaims? I am interested in them with a view to plugging them into Argo workflows to produce input and output data sets.
Is there a good way to do that with Admiralty/Argo, or should I use buckets?
I have not found specifications regarding that matter in the documentation, but maybe I have overlooked something.
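For context, the bucket route I am considering would look roughly like this in Argo (a hypothetical sketch; the bucket name and credentials secret are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pvc-vs-bucket-
spec:
  entrypoint: produce
  templates:
    - name: produce
      container:
        image: busybox
        command: ["sh", "-c", "echo hello world > /tmp/hello.txt"]
      # Pass data between clusters through object storage instead of a PVC.
      outputs:
        artifacts:
          - name: hello
            path: /tmp/hello.txt
            gcs:
              bucket: my-artifact-bucket   # placeholder
              key: hello.txt
              serviceAccountKeySecret:
                name: gcs-credentials      # placeholder secret
                key: serviceAccountKey
```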
Thanks in advance!
Hi @simonbonnefoy, PVCs and PVs aren't specifically supported yet. As you saw, you had to copy pvc-demo to the target cluster for scheduling to work (Admiralty didn't make the one in the source cluster "follow"). The two pvc-demos then gave birth to two independent PVs referring to different Google Persistent Disks.
Would you like them to refer to the same disk? What if the clusters are in different regions? That may confuse the CSI driver.
Would you like them to refer to different disks with data replication? You'd need a 3rd-party CSI driver.
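For the first option, within a single zone you could in principle pre-provision a PV in each cluster against the same disk yourself (a rough sketch, not something Admiralty automates; my-shared-disk is a placeholder for an existing GCE Persistent Disk reachable from both clusters' nodes):

```yaml
# Created manually in BOTH clusters, then bound by a claim in each.
# ReadOnlyMany because a GCE Persistent Disk can only be attached to
# multiple nodes in read-only mode.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-shared
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: my-shared-disk
    fsType: ext4
    readOnly: true
```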