Create a storage VLAN as part of installation #4
I successfully prototyped the `MachineConfig` approach. First, we must craft a NetworkManager config file to drop into `/etc/NetworkManager/system-connections/`. In this case I used a VLAN ID of 20, and the provisioning NIC is `ens3`.
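For reference, this is the keyfile content in question (it is exactly what the base64 payload in the `MachineConfig` below decodes to):

```ini
[connection]
id=vlan-vlan20
type=vlan
interface-name=vlan20
autoconnect=true
[ipv4]
method=manual
addresses=172.5.0.2/24
gateway=172.5.0.1
[vlan]
parent=ens3
id=20
```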
These contents must then be base64 encoded and placed into a `MachineConfig` resource:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 00-ocs-vlan
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,W2Nvbm5lY3Rpb25dCmlkPXZsYW4tdmxhbjIwCnR5cGU9dmxhbgppbnRlcmZhY2UtbmFtZT12bGFuMjAKYXV0b2Nvbm5lY3Q9dHJ1ZQpbaXB2NF0KbWV0aG9kPW1hbnVhbAphZGRyZXNzZXM9MTcyLjUuMC4yLzI0CmdhdGV3YXk9MTcyLjUuMC4xClt2bGFuXQpwYXJlbnQ9ZW5zMwppZD0yMAo=
        filesystem: root
        mode: 0600
        path: /etc/NetworkManager/system-connections/ocs-vlan.conf
```

At installation time, I dropped this yaml file into the installer's manifests directory and then ran the installer as usual. The end result was a working cluster with the configured VLAN interface present on each master node.
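The exact commands aren't preserved in the comment above, but a minimal sketch of the workflow, assuming the standard `openshift-install` manifests flow and illustrative file names, would be:

```sh
# Base64-encode the keyfile for the Ignition "source" data URL
base64 -w0 ocs-vlan.conf

# Generate install manifests, add the MachineConfig, then run the install
openshift-install create manifests --dir=ocp
cp 00-ocs-vlan.yaml ocp/openshift/
openshift-install create cluster --dir=ocp
```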
I believe it would be nice to have all that 'non-OCP' stuff deployed as post-installation tasks. I'm thinking about:

The disable/enable is to avoid rebooting the servers (especially baremetal) for every machine config.
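If the disable/enable here refers to pausing the machine config pool while batching changes (my reading, not stated explicitly above), a minimal sketch would be:

```sh
# Pause the pool so queued MachineConfigs don't each trigger a reboot
oc patch machineconfigpool master --type merge -p '{"spec":{"paused":true}}'

# ... apply the post-install MachineConfigs ...

# Resume; the pool rolls out all pending changes together
oc patch machineconfigpool master --type merge -p '{"spec":{"paused":false}}'
```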
Yes, it could be post-install. I was thinking that injecting them at install time would save a reboot, though.
Another thing to keep in mind is that if we use machine configs, there would need to be a different machine config per host (as every host would have its own storage IP). With regard to install vs post-install, I believe keeping the install phase as 'OCP4 vanilla' as possible and then adding customization post-installation would be better, but that's just my thought :) (I do see that doing it at installation time can save some time...)
I figured we'd use DHCP, with the ToR switch running a DHCP server for this VLAN. The static IP was just for prototype purposes when I didn't have DHCP running on the VLAN.
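For illustration only (not taken from the thread), with DHCP on the VLAN the keyfile from the first prototype would shrink to the obvious `method=auto` variant:

```ini
[connection]
id=vlan-vlan20
type=vlan
interface-name=vlan20
autoconnect=true
[ipv4]
method=auto
[vlan]
parent=ens3
id=20
```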
I was able to prototype the `kubernetes-nmstate` approach as well. After deploying `kubernetes-nmstate`, I created a file with the following `spec`:

```yaml
spec:
  desiredState:
    interfaces:
    - ipv4:
        address:
        - ip: 172.6.0.2
          prefix-length: 24
        dhcp: false
        enabled: true
      mtu: 1500
      name: vlan30
      state: up
      type: vlan
      vlan:
        base-iface: ens3
        id: 30
```

Then I applied it as a patch against the node network state of one of the nodes. I was able to ssh in to the node and confirm the VLAN interface was present.
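The exact command isn't preserved in the thread; a plausible invocation, assuming the `NodeNetworkState` object is named after the node and the spec above is saved as `vlan30-patch.yaml` (both assumptions), would be:

```sh
# Merge the desiredState patch into the node's NodeNetworkState object
kubectl patch nodenetworkstate <node-name> --type merge -p "$(cat vlan30-patch.yaml)"
```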
@russellb awesome, it's very nice to see people using kubernetes-nmstate for their purposes. One improvement you can make is to use `NodeNetworkConfigurationPolicy` instead of `NodeNetworkState`; it allows you to apply a `desiredState` across the whole cluster or only to specific nodes, depending on the labels on them. Also, it looks like we can make OCS wait for kubernetes-nmstate somehow, as stated by @DHELLMAN.
Thanks. A policy that applies a change across multiple nodes sounds like a better fit. I just followed the examples in the repo, which showed patching `NodeNetworkState` directly.
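Not from the thread, but for illustration, a `NodeNetworkConfigurationPolicy` equivalent of the prototype above might look roughly like this (the exact `apiVersion` depends on the kubernetes-nmstate version, and the node selector label is an assumption):

```yaml
apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: storage-vlan
spec:
  nodeSelector:
    node-role.kubernetes.io/master: ""
  desiredState:
    interfaces:
    - name: vlan30
      type: vlan
      state: up
      mtu: 1500
      vlan:
        base-iface: ens3
        id: 30
      ipv4:
        dhcp: true
        enabled: true
```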
Yes - as long as there is a way to determine when the configuration has been successfully applied.
Some rough thoughts after prototyping both approaches:
Yeah, I'd prefer the post-install route too.
@russellb We are implementing conditions for policies right now, so it will be easier to check whether the `desiredState` has been applied: nmstate/kubernetes-nmstate#154
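Once those conditions land, a hedged sketch of how install-scripts or OCS could block on them (the condition name and policy name here are assumptions, not confirmed by the linked issue):

```sh
# Wait until the policy reports it has been applied on the matching nodes
kubectl wait nodenetworkconfigurationpolicy storage-vlan --for=condition=Available --timeout=300s
```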
One KNI environment customization that we need is the creation of a VLAN for storage use.
The servers we're using will have at least two network interfaces. One of these interfaces is used for provisioning. We also need to create a VLAN on the provisioning NIC for storage.
This issue is to discuss the options for accomplishing this and to eventually merge something into `install-scripts`. The options I've looked into so far:

- Use `kubernetes-nmstate` -- https://github.com/nmstate/kubernetes-nmstate
- Use a `MachineConfig` resource -- https://github.com/openshift/machine-config-operator/