Deploy VM Hosted Repo
NOTE: We recommend not using the auto_deploy_vm CLI command for deployments, as it is no longer actively maintained.
Checklist:
- Please create VM(s) with at least 4 CPUs and 8GB of RAM.
- For single-node VM deployment, ensure the VM is created with 2+ attached disks.
OR - For multi-node VM deployment, ensure the VMs are created with 8+ attached disks.
- Do you see the attached devices when you run lsblk? (See the quick checks after this list.)
- Do the systems in your setup have valid hostnames, and are those hostnames reachable with ping?
- Do you have IPs assigned to all NICs (eth0, eth1, and eth2)?
- Identify the primary node and run the commands below on it.
NOTE: For single-node VM, the VM node itself is treated as primary node.
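Quick pre-flight checks (a minimal sketch; srvnode-2.localdomain below is a placeholder hostname, substitute your own):
# list attached block devices; the extra data/metadata disks should appear alongside the OS disk
lsblk
# confirm every node is reachable by hostname
ping -c 3 srvnode-2.localdomain
# confirm eth0, eth1 and eth2 each have an IP address assigned
ip addr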
- Set root user password on all nodes:
sudo passwd root
- Install provisioner API (NOTE: to be run on all 3 nodes)
- Set repository URL:
export CORTX_RELEASE_REPO="<URL to Cortx R2 stack release repo>"
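A hypothetical example, reusing the OS-specific placeholder from the target-build note later on this page (substitute the actual base URL from the Cortx RE team):
export CORTX_RELEASE_REPO="<build_url>/centos-7.8.2003"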
- Install Provisioner API and requisite packages:
yum install -y yum-utils
yum-config-manager --add-repo "${CORTX_RELEASE_REPO}/3rd_party/"
yum-config-manager --add-repo "${CORTX_RELEASE_REPO}/cortx_iso/"
- Run the following command (a single command, from cat through EOF) to create /etc/pip.conf:
cat <<EOF >/etc/pip.conf
[global]
timeout: 60
index-url: $CORTX_RELEASE_REPO/python_deps/
trusted-host: $(echo $CORTX_RELEASE_REPO | awk -F '/' '{print $3}')
EOF
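For illustration only: assuming a hypothetical CORTX_RELEASE_REPO of http://hosted.example/cortx/build, the resulting /etc/pip.conf would read:
[global]
timeout: 60
index-url: http://hosted.example/cortx/build/python_deps/
trusted-host: hosted.example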
- Cortx Pre-requisites:
yum install --nogpgcheck -y java-1.8.0-openjdk-headless
yum install --nogpgcheck -y python3 cortx-prereq sshpass
- Pre-reqs for Provisioner:
yum install --nogpgcheck -y python36-m2crypto salt salt-master salt-minion
- Provisioner API:
yum install --nogpgcheck -y python36-cortx-prvsnr
- Cleanup temporary repos:
rm -rf /etc/yum.repos.d/*3rd_party*.repo
rm -rf /etc/yum.repos.d/*cortx_iso*.repo
yum clean all
rm -rf /var/cache/yum/
rm -rf /etc/pip.conf
- Verify the provisioner version (should be 0.36.0 or above):
provisioner --version
- Create a config.ini file at some location:
IMPORTANT NOTE: Please verify every detail in this file against your node; in particular, confirm the interface names are correct for your node.
Update the required details in ~/config.ini using the sample config.ini below:
vi ~/config.ini
[cluster]
mgmt_vip=

[srvnode_default]
network.data.private_interfaces=eth3,eth4
network.data.public_interfaces=eth1,eth2
network.mgmt.interfaces=eth0
bmc.user=None
bmc.secret=None
storage.cvg.0.data_devices=/dev/sdc
storage.cvg.0.metadata_devices=/dev/sdb
network.data.private_ip=None
storage.durability.sns.data=1
storage.durability.sns.parity=0
storage.durability.sns.spare=0

[srvnode-1]
hostname=srvnode-1.localdomain
roles=primary,openldap_server,kafka_server

[enclosure_default]
type=virtual
controller.type=virtual

[enclosure-1]
Note: Find the devices on each node separately using the commands provided below, and fill them into the respective config.ini sections.
Complete list of attached devices:
device_list=$(lsblk -nd -o NAME -e 11|grep -v sda|sed 's|sd|/dev/sd|g'|paste -s -d, -)
Values for storage.cvg.0.metadata_devices:
echo ${device_list%%,*}
Values for storage.cvg.0.data_devices:
echo ${device_list#*,}
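A worked example, assuming a hypothetical VM whose non-OS disks are /dev/sdb and /dev/sdc:
# device_list expands to: /dev/sdb,/dev/sdc
echo ${device_list%%,*}    # prints /dev/sdb  -> use for storage.cvg.0.metadata_devices
echo ${device_list#*,}     # prints /dev/sdc  -> use for storage.cvg.0.data_devices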
[cluster]
mgmt_vip=

[srvnode_default]
network.data.private_interfaces=eth3,eth4
network.data.public_interfaces=eth1,eth2
network.mgmt.interfaces=eth0
bmc.user=None
bmc.secret=None
network.data.private_ip=None

[srvnode-1]
hostname=srvnode-1.localdomain
roles=primary,openldap_server,kafka_server
storage.cvg.0.data_devices=<Values for storage.cvg.0.data_devices from command executed earlier>
storage.cvg.0.metadata_devices=<Values for storage.cvg.0.metadata_devices from command executed earlier>

[srvnode-2]
hostname=srvnode-2.localdomain
roles=secondary,openldap_server,kafka_server
storage.cvg.0.data_devices=<Values for storage.cvg.0.data_devices from command executed earlier>
storage.cvg.0.metadata_devices=<Values for storage.cvg.0.metadata_devices from command executed earlier>

[srvnode-3]
hostname=srvnode-3.localdomain
roles=secondary,openldap_server,kafka_server
storage.cvg.0.data_devices=<Values for storage.cvg.0.data_devices from command executed earlier>
storage.cvg.0.metadata_devices=<Values for storage.cvg.0.metadata_devices from command executed earlier>

[enclosure_default]
type=virtual

[enclosure-1]
[enclosure-2]
[enclosure-3]
NOTE:
- private_ip, bmc_secret, and bmc_user should be None for VM.
- mgmt_vip must be provided for 3-node deployments.
- Manual deployment of a VM consists of the following steps from Auto-Deploy, which can be executed individually:
NOTE: Ensure VM Preparation for Deployment has been completed successfully before proceeding.
Bootstrap VM(s): run the setup_provisioner provisioner CLI command:
Single node (if using remote hosted repos):
provisioner setup_provisioner srvnode-1:$(hostname -f) \
--logfile --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm --config-path ~/config.ini \
--dist-type bundle --target-build ${CORTX_RELEASE_REPO}
Multi node (if using remote hosted repos):
provisioner setup_provisioner --console-formatter full --logfile \
--logfile-filename /var/log/seagate/provisioner/setup.log --source rpm \
--config-path ~/config.ini --ha \
--dist-type bundle \
--target-build ${CORTX_RELEASE_REPO} \
srvnode-1:<fqdn:primary_hostname> \
srvnode-2:<fqdn:secondary_hostname> \
srvnode-3:<fqdn:secondary_hostname>
Example:
provisioner setup_provisioner \
--logfile --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm --config-path ~/config.ini \
--ha --dist-type bundle --target-build ${CORTX_RELEASE_REPO} \
srvnode-1:host1.localdomain srvnode-2:host2.localdomain srvnode-3:host3.localdomain
Update data from config.ini into Salt pillar. Export pillar data to provisioner_cluster.json.
provisioner configure_setup ./config.ini <number of nodes in cluster>
salt-call state.apply components.system.config.pillar_encrypt
provisioner confstore_export
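For example, for the 3-node cluster described in the sample config.ini above, the first command would be:
provisioner configure_setup ./config.ini 3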
NOTE:
- target-build should be a link to the base URL for the hosted 3rd_party and cortx_iso repos.
- For --target-build, use builds from the URL below based on OS:
  centos-7.8.2003: <build_url>/centos-7.8.2003/ OR contact the Cortx RE team for the latest URL.
- This command will ask for each node's root password during initial cluster setup. This is a one-time activity required to set up passwordless SSH across nodes.
- For setting up a cluster of more than 3 nodes, append --name <setup_profile_name> to the auto_deploy_vm command's input parameters.
Once the deployment is bootstrapped (i.e., the auto_deploy or setup_provisioner command has executed successfully), verify the salt master setup on all nodes (setup verification checklist):
salt '*' test.ping
salt "*" service.stop puppet
salt "*" service.disable puppet
salt '*' pillar.get release
salt '*' grains.get node_id
salt '*' grains.get cluster_id
salt '*' grains.get roles
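As a quick sanity check (a sketch, not an exhaustive verification), every minion should respond to test.ping with True, and the pillar/grains queries should return non-empty values on every node:
salt '*' test.ping --out=txt    # each srvnode-* should report: True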
If the provisioner setup is complete and you want to deploy in stages, deploy by component group as follows (a convenience sketch follows the list):
NOTE: At any stage, if there is a failure, it is advised to run destroy for that particular group.
For help on destroy commands, refer to https://github.com/Seagate/cortx-prvsnr/wiki/Teardown-Node(s)#targeted-teardown
- System component group
  Single Node: provisioner deploy --setup-type single --states system
  Multi Node: provisioner deploy --setup-type 3_node --states system
- Prereq component group
  Single Node: provisioner deploy --setup-type single --states prereq
  Multi Node: provisioner deploy --setup-type 3_node --states prereq
- Utils component group
  Single Node: provisioner deploy --setup-type single --states utils
  Multi Node: provisioner deploy --setup-type 3_node --states utils
- IO path component group
  Single Node: provisioner deploy --setup-type single --states iopath
  Multi Node: provisioner deploy --setup-type 3_node --states iopath
- Control path component group
  Single Node: provisioner deploy --setup-type single --states controlpath
  Multi Node: provisioner deploy --setup-type 3_node --states controlpath
- HA component group
  Single Node: provisioner deploy --setup-type single --states ha
  Multi Node: provisioner deploy --setup-type 3_node --states ha
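The component groups can also be applied in order with a small loop; a convenience sketch for a 3-node setup, run on the primary node (it stops at the first failure so the failed group can be destroyed and retried):
# apply the component groups in the order listed above
for state in system prereq utils iopath controlpath ha; do
    provisioner deploy --setup-type 3_node --states "$state" || break
done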
- Execute the following command on the primary node to start the cluster:
cortx cluster start
- Verify Cortx cluster status:
hctl status
- Run the auto_deploy_vm provisioner CLI command (not recommended; see the note at the top of this page):
Single node (if using remote hosted repos):
provisioner auto_deploy_vm srvnode-1:$(hostname -f) \
--logfile --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm --config-path ~/config.ini \
--dist-type bundle --target-build ${CORTX_RELEASE_REPO}
Multi node (if using remote hosted repos):
provisioner auto_deploy_vm --console-formatter full --logfile \
--logfile-filename /var/log/seagate/provisioner/setup.log --source rpm \
--config-path ~/config.ini --ha \
--dist-type bundle \
--target-build '<path to base url for hosted repo>' \
srvnode-1:<fqdn:primary_hostname> \
srvnode-2:<fqdn:secondary_hostname> \
srvnode-3:<fqdn:secondary_hostname>
Example:
provisioner auto_deploy_vm \
--logfile --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm --config-path ~/config.ini \
--ha --dist-type bundle --target-build ${CORTX_RELEASE_REPO} \
srvnode-1:host1.localdomain srvnode-2:host2.localdomain srvnode-3:host3.localdomain
- Start cluster (irrespective of the number of nodes):
NOTE: Execute this command only on the primary node (srvnode-1).
cortx cluster start
- Check if the cluster is running:
hctl status
NOTE:
- target-build should be a link to the base URL for the hosted 3rd_party and cortx_iso repos.
- For --target-build, use builds from the URL below based on OS:
  centos-7.8.2003: <build_url>/centos-7.8.2003/ OR contact the Cortx RE team for the latest URL.
- This command will ask for each node's root password during initial cluster setup. This is a one-time activity required to set up passwordless SSH across nodes.
- For setting up a cluster of more than 3 nodes, append --name <setup_profile_name> to the auto_deploy_vm command's input parameters.