This repository has been archived by the owner on Mar 23, 2020. It is now read-only.

Create a storage VLAN as part of installation #4

Open
russellb opened this issue Jul 26, 2019 · 11 comments

Comments

@russellb
Member

One KNI environment customization that we need is the creation of a VLAN for storage use.

The servers we're using will have at least two network interfaces. One of these interfaces is used for provisioning. We also need to create a VLAN on the provisioning NIC for storage.

This issue is to discuss the options for accomplishing this and to eventually merge something into install-scripts.

The options I've looked into so far:

  1. Use kubernetes-nmstate -- https://github.com/nmstate/kubernetes-nmstate

  2. Use a MachineConfig resource -- https://github.com/openshift/machine-config-operator/

@russellb
Member Author

I successfully prototyped the MachineConfig resource approach, and it seemed to work fine.

First, we must craft a config file to drop into the /etc/NetworkManager/system-connections directory on each host. For my prototype purposes, I used the config file below, which configures a VLAN interface with a static IP, since this VLAN doesn't actually exist in my test environment and therefore has no DHCP server.

In this case I used a VLAN ID of 20, and the provisioning NIC is ens3.

[connection]
id=vlan-vlan20
type=vlan
interface-name=vlan20
autoconnect=true
[ipv4]
method=manual
addresses=172.5.0.2/24
gateway=172.5.0.1
[vlan]
parent=ens3
id=20

These contents must then be base64 encoded and placed into a MachineConfig resource. Here's the resource I used:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 00-ocs-vlan
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,W2Nvbm5lY3Rpb25dCmlkPXZsYW4tdmxhbjIwCnR5cGU9dmxhbgppbnRlcmZhY2UtbmFtZT12bGFuMjAKYXV0b2Nvbm5lY3Q9dHJ1ZQpbaXB2NF0KbWV0aG9kPW1hbnVhbAphZGRyZXNzZXM9MTcyLjUuMC4yLzI0CmdhdGV3YXk9MTcyLjUuMC4xClt2bGFuXQpwYXJlbnQ9ZW5zMwppZD0yMAo=
        filesystem: root
        mode: 0600
        path: /etc/NetworkManager/system-connections/ocs-vlan.conf
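For reference, the base64 payload above is just the keyfile contents encoded without line wrapping. A minimal sketch of generating it, assuming the keyfile is saved locally as ocs-vlan.conf (GNU coreutils base64):

# Encode the NetworkManager keyfile for use in the Ignition data URL.
base64 -w0 ocs-vlan.conf
# Prefix the output with "data:text/plain;charset=utf-8;base64," to form the
# contents.source value used in the MachineConfig above.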

At installation time, I dropped this yaml file in the ocp/openshift directory after running ocp/openshift-install --dir ocp create manifests.

I then ran ocp/openshift-install --dir ocp create cluster and let the install finish as usual.
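Roughly, the whole sequence was (a sketch; the manifest filename 00-ocs-vlan.yaml is just what I'm calling the resource above):

# Generate the install manifests, drop in the extra MachineConfig,
# then run the full install as usual.
ocp/openshift-install --dir ocp create manifests
cp 00-ocs-vlan.yaml ocp/openshift/
ocp/openshift-install --dir ocp create cluster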

The end result was a working cluster with the configured VLAN interface present on each master node.

@e-minguez
Member

I believe it would be nice to have all that 'non OCP' stuff deployed as post-installation tasks.

I'm thinking about:

  • Hosting all the required 'post installation' machine-configs in a specific folder in this repo
  • Deploy OCP4 as usual
  • Run the post-installation as
# Disable auto reboot
oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/master

# Apply all required changes
oc create -f mc/

# Enable auto reboot
oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/master

The disable/enable is to avoid rebooting the servers (especially bare metal) for every MachineConfig.

@russellb
Member Author

I believe it would be nice to have all that 'non OCP' stuff deployed as post-installation tasks.

Yes, it could be post-install. I was thinking that injecting them at install time would save a reboot, though.

@e-minguez
Member

Another thing to keep in mind is that if we use machine-configs, there would need to be a different machine-config per host (as every host would have its own storage IP).

WRT install vs post-install, I believe keeping the install phase as close to 'vanilla OCP4' as possible and then adding customization at post-installation would be better, but that's just my thought :) (I see that doing it at installation time can save some time...)

@russellb
Member Author

Another thing to keep in mind is that if we use machine-configs, there would need to be a different machine-config per host (as every host would have its own storage IP).

I figured we'd use DHCP with the ToR switch running a DHCP server for this VLAN. The static IP was just for prototype purposes when I didn't have DHCP running on the VLAN.
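For that setup, a sketch of the DHCP variant of the keyfile (method=auto instead of the static address; this assumes a DHCP server is reachable on the VLAN, e.g. on the ToR switch, and the filename is just illustrative):

# Hypothetical DHCP variant of the keyfile used in the prototype above.
cat > ocs-vlan-dhcp.conf <<'EOF'
[connection]
id=vlan-vlan20
type=vlan
interface-name=vlan20
autoconnect=true

[ipv4]
method=auto

[vlan]
parent=ens3
id=20
EOF

# Encode for embedding in the MachineConfig, as before.
base64 -w0 ocs-vlan-dhcp.conf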

@russellb
Member Author

russellb commented Jul 29, 2019

I was able to prototype the kubernetes-nmstate approach to do about the same thing as the MachineConfig prototype.

To deploy kubernetes-nmstate, I needed a few fixes to the docs: nmstate/kubernetes-nmstate#146

I created a file, vlan30-patch.yaml:

spec:
  desiredState:
    interfaces:
      - ipv4:
          address:
          - ip: 172.6.0.2
            prefix-length: 24
          dhcp: false
          enabled: true
        mtu: 1500
        name: vlan30
        state: up
        type: vlan
        vlan:
          base-iface: ens3
          id: 30

Then I applied it as a patch against the node network state of master-0:

oc patch --type merge nodenetworkstate master-0 -p "$(cat vlan30-patch.yaml)"

I was able to ssh in to master-0 to verify that my new VLAN interface had been created.

In the NodeNetworkState for master-0, I can also see my new vlan30 interface listed among the current interfaces in the status field: status.currentState.interfaces.
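For reference, a quick CLI check for that (a sketch against the fields described above):

# List the current interface names reported for master-0 and look for vlan30.
oc get nodenetworkstate master-0 \
  -o jsonpath='{.status.currentState.interfaces[*].name}' | tr ' ' '\n' | grep -x vlan30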

@qinqon

qinqon commented Jul 29, 2019

@russellb awesome, it's very nice to see people using kubernetes-nmstate for their purposes. One improvement you can make is to use NodeNetworkConfigurationPolicy instead of NodeNetworkState; it allows you to apply desiredState to the whole cluster or to specific nodes, depending on their labels.

It also looks like we can make OCS wait for kubernetes-nmstate somehow, as stated by @DHELLMAN.

@russellb
Member Author

@russellb awesome, it's very nice to see people using kubernetes-nmstate for their purposes. One improvement you can make is to use NodeNetworkConfigurationPolicy instead of NodeNetworkState; it allows you to apply desiredState to the whole cluster or to specific nodes, depending on their labels.

Thanks. A policy that applies a change across multiple nodes sounds like a better fit. I just followed the examples in the repo which showed patching NodeNetworkState. It looks like the docs could benefit from some examples using NodeNetworkConfigurationPolicy.
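For the record, a rough sketch of what such a policy might look like for the vlan30 example above. The apiVersion, policy name, and node selector labels here are my assumptions, not taken from the kubernetes-nmstate docs, so treat this as illustrative only:

# Hypothetical NodeNetworkConfigurationPolicy applying the vlan30 config to all
# nodes labeled as masters. DHCP is used here because a single policy applies
# the same desiredState to every selected node.
cat <<'EOF' | oc apply -f -
apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vlan30-storage
spec:
  nodeSelector:
    node-role.kubernetes.io/master: ""
  desiredState:
    interfaces:
      - name: vlan30
        type: vlan
        state: up
        mtu: 1500
        ipv4:
          enabled: true
          dhcp: true
        vlan:
          base-iface: ens3
          id: 30
EOF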

It also looks like we can make OCS wait for kubernetes-nmstate somehow, as stated by @DHELLMAN.

Yes - as long as there is a way to determine when the configuration has been successfully applied. I figured with NodeNetworkState the code would wait until its configured interface shows up in the status section. Is there a roll-up of status in NodeNetworkConfigurationPolicy, or would code have to look at every NodeNetworkState manually to figure out when its config policy had been applied?

@russellb
Member Author

Some rough thoughts after prototyping both approaches:

  1. What we need for this specific case is fairly simple network configuration, and the MachineConfig resource seems like a fine way to start. Because of how MachineConfigs are applied, we know the configuration will be in place before OCS pods start. I suggest we start here by creating an appropriate MachineConfig resource in this install-scripts repository.

  2. Using kubernetes-nmstate does provide a nicer, declarative, higher-level interface for host network management, so it seems like a good long-term target. There is a bit more work to be done to ensure that the applied config is active before starting OCS. Ideally this would be built into a top-level OCS operator that knows that the network configuration it depends on must be applied before it can proceed to start the components that depend on that network. I don't expect all of that to get lined up quickly, so it is perhaps a more realistic target for a future release.

@markmc
Member

markmc commented Aug 6, 2019

I believe it would be nice to have all those 'non OCP' stuff deployed as a post installation tasks.

Yes, it could be post-install. I was thinking that injecting them at install time would save a reboot, though.

Yeah, I'd prefer the post-install route too

I'd prefer to avoid doing the create manifests thing at all - there are plenty of things we could do to the manifests that wouldn't be considered supportable, so if we can avoid even giving ourselves that option ...

@qinqon

qinqon commented Sep 10, 2019

Yes - as long as there is a way to determine when the configuration has been successfully applied. I figured with NodeNetworkState the code would wait until its configured interface shows up in the status section. Is there a roll-up of status in NodeNetworkConfigurationPolicy, or would code have to look at every NodeNetworkState manually to figure out when its config policy had been applied?

@russellb We are implementing conditions right now for policies, so it will be easier to check whether the desiredState has been applied: nmstate/kubernetes-nmstate#154
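Once those conditions land, the waiting step could look something like this sketch (the policy name and the Available condition name are assumptions; check what #154 actually exposes):

# Hypothetical wait on the policy once conditions are reported; the exact
# condition name depends on what nmstate/kubernetes-nmstate#154 exposes.
oc wait nodenetworkconfigurationpolicy vlan30-storage --for=condition=Available --timeout=300s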
