
[DCN] Support Manila Shares with Local Ceph Clusters in DCN #467

Open · lkuchlan wants to merge 1 commit into main from manila_dcn
Conversation

lkuchlan
This update enables deploying three Manila shares, each connected to its local Ceph cluster within the same availability zone (AZ). In DCN, each "site" is configured to use its respective local Ceph cluster.
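In rough outline (an illustrative sketch, not the literal diff; the keys are taken from the generated manilaShares output shown later in this thread), each share entry is keyed by its AZ and points at that AZ's Ceph conf:

```yaml
manilaShares:
  az0:
    customServiceConfig: |
      [DEFAULT]
      enabled_share_backends = cephfs
      [cephfs]
      share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
      cephfs_conf_path = /etc/ceph/az0.conf    # az0's local Ceph cluster
      storage_availability_zone = az0
  # az1 and az2 follow the same pattern with /etc/ceph/az1.conf
  # and /etc/ceph/az2.conf respectively
```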

@openshift-ci openshift-ci bot requested review from fultonj and karelyatin December 18, 2024 10:23

openshift-ci bot commented Dec 18, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: lkuchlan
Once this PR has been reviewed and has the lgtm label, please assign abays for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

examples/dt/dcn/service-values.yaml: 4 review threads (outdated, resolved)
@lkuchlan force-pushed the manila_dcn branch 2 times, most recently from b8edfe4 to 6af42f9 on December 18, 2024 at 12:51
manilaAPI:
customServiceConfig: |
[DEFAULT]
enabled_share_protocols=nfs,cephfs
manilaScheduler:
Contributor

As this is a configMap that points to the default control plane provided in lib/controlplane, I realized we do not need manilaScheduler here. Can you remove L204 and L205?

Author

Done

Contributor
fultonj commented Dec 19, 2024

The OpenStackControlPlane generated by this change produces the desired result as far as I can tell. For example, compare the cinder [1] and manila [2] output.

[1]

      cinderVolumes:
        az0:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = ceph
            glance_api_servers = https://glance-az0-internal.openstack.svc:9292
            [ceph]
            volume_backend_name = ceph
            volume_driver = cinder.volume.drivers.rbd.RBDDriver
            rbd_ceph_conf = /etc/ceph/az0.conf
            rbd_user = openstack
            rbd_pool = volumes
            rbd_flatten_volume_from_snapshot = False
            rbd_secret_uuid = CHANGEME
            rbd_cluster_name = az0
            backend_availability_zone = az0
        az1:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = ceph
            glance_api_servers = https://glance-az1-internal.openstack.svc:9292
            [ceph]
            volume_backend_name = ceph
            volume_driver = cinder.volume.drivers.rbd.RBDDriver
            rbd_ceph_conf = /etc/ceph/az1.conf
            rbd_user = openstack
            rbd_pool = volumes
            rbd_flatten_volume_from_snapshot = False
            rbd_secret_uuid = CHANGEME
            rbd_cluster_name = az1
            backend_availability_zone = az1
        az2:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = ceph
            glance_api_servers = https://glance-az2-internal.openstack.svc:9292
            [ceph]
            volume_backend_name = ceph
            volume_driver = cinder.volume.drivers.rbd.RBDDriver
            rbd_ceph_conf = /etc/ceph/az2.conf
            rbd_user = openstack
            rbd_pool = volumes
            rbd_flatten_volume_from_snapshot = False
            rbd_secret_uuid = CHANGEME
            rbd_cluster_name = az2
            backend_availability_zone = az2
      customServiceConfig: |
        [DEFAULT]
        storage_availability_zone = az0
      databaseInstance: openstack
      preserveJobs: false
      secret: osp-secret
    uniquePodNames: false

[2]

  manila:
    apiOverride:
      route:
        haproxy.router.openshift.io/timeout: 60s
    enabled: true
    template:
      manilaAPI:
        customServiceConfig: |
          [DEFAULT]
          enabled_share_protocols=nfs,cephfs
        networkAttachments:
        - internalapi
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        replicas: 1
      manilaScheduler:
        replicas: 1
      manilaShares:
        az0:
          customServiceConfig: |
            [DEFAULT]
            enabled_share_backends = cephfs
            enabled_share_protocols = cephfs
            [cephfs]
            driver_handles_share_servers = False
            share_backend_name = cephfs
            share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
            cephfs_conf_path = /etc/ceph/az0.conf
            cephfs_cluster_name = ceph
            cephfs_auth_id=openstack
            cephfs_volume_mode = 0755
            cephfs_protocol_helper_type = CEPHFS
            storage_availability_zone = az0
        az1:
          customServiceConfig: |
            [DEFAULT]
            enabled_share_backends = cephfs
            enabled_share_protocols = cephfs
            [cephfs]
            driver_handles_share_servers = False
            share_backend_name = cephfs
            share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
            cephfs_conf_path = /etc/ceph/az1.conf
            cephfs_cluster_name = ceph
            cephfs_auth_id=openstack
            cephfs_volume_mode = 0755
            cephfs_protocol_helper_type = CEPHFS
            storage_availability_zone = az1
        az2:
          customServiceConfig: |
            [DEFAULT]
            enabled_share_backends = cephfs
            enabled_share_protocols = cephfs
            [cephfs]
            driver_handles_share_servers = False
            share_backend_name = cephfs
            share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
            cephfs_conf_path = /etc/ceph/az2.conf
            cephfs_cluster_name = ceph
            cephfs_auth_id=openstack
            cephfs_volume_mode = 0755
            cephfs_protocol_helper_type = CEPHFS
            storage_availability_zone = az2
      preserveJobs: false
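As a quick local sanity check of the mapping above (a standalone sketch, not part of the PR; the config text is copied from the generated output), each share's customServiceConfig can be parsed with Python's configparser to confirm that every AZ points at its own Ceph conf:

```python
import configparser

# customServiceConfig bodies as generated above, one per AZ
configs = {
    az: f"""[DEFAULT]
enabled_share_backends = cephfs
enabled_share_protocols = cephfs
[cephfs]
driver_handles_share_servers = False
share_backend_name = cephfs
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path = /etc/ceph/{az}.conf
cephfs_cluster_name = ceph
cephfs_auth_id=openstack
cephfs_volume_mode = 0755
cephfs_protocol_helper_type = CEPHFS
storage_availability_zone = {az}
"""
    for az in ("az0", "az1", "az2")
}

for az, text in configs.items():
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    # each share must use its own AZ's ceph.conf and report that AZ
    assert cfg["cephfs"]["cephfs_conf_path"] == f"/etc/ceph/{az}.conf"
    assert cfg["cephfs"]["storage_availability_zone"] == az
```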

Contributor
fultonj commented Dec 19, 2024

In general I'm +1 to merge, but let's confirm with a downstream job. After Liron confirms, I say we merge.

3 participants