# Provisioner Setup
Checklist:

- Do you see the attached disks when you run `lsblk`?
- Do both systems in your setup have valid hostnames, and are those hostnames reachable via `ping`?
- Do all NICs (eth0, eth1, eth2, eth3, and eth4) have IPs assigned?
- Identify the primary node and run the commands below on it (see the pre-flight sketch after this note).

NOTE: For a single-node VM, the VM node itself is treated as the primary node.
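A minimal pre-flight sketch that automates the checklist above. The hostnames `srvnode-1`/`srvnode-2` are placeholders, and the NIC names are the ones listed in the checklist; substitute your own values. Only standard Linux tooling (`ping`, `lsblk`, `ip`) is assumed.

```bash
#!/bin/bash
# Pre-flight sketch for the checklist above. Hostnames and NIC names are
# examples; replace them with the values from your setup.

# Hostname reachability (hypothetical hostnames).
for host in srvnode-1 srvnode-2; do
    ping -c 1 -W 2 "$host" >/dev/null && echo "OK: $host reachable" \
                                      || echo "FAIL: $host unreachable"
done

# Block devices: the two attached data disks (e.g. sdb, sdc) should show up here.
lsblk

# Every NIC should have an IPv4 address assigned.
for nic in eth0 eth1 eth2 eth3 eth4; do
    ip -4 addr show "$nic" 2>/dev/null | grep -q 'inet ' \
        && echo "OK: $nic has an IP" || echo "FAIL: $nic has no IP"
done
```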
- Set the root user password on all nodes:

```
sudo passwd root
```
- SSH connectivity check:

```
ssh root@<node> exit
```
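A hedged variant of the same check, in case you want it to fail fast instead of prompting for a password when key-based auth is not yet in place (`<node>` remains a placeholder; the `ssh` options used are standard OpenSSH):

```bash
# Non-interactive check: exits non-zero instead of prompting when key-based
# auth is not yet set up. <node> is a placeholder for the peer hostname.
ssh -o BatchMode=yes -o ConnectTimeout=5 root@<node> exit \
    && echo "SSH OK" || echo "SSH FAILED"
```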
- Storage configuration check. The VM should have exactly 2 attached disks:

```
lsblk -d | grep -E 'sdb|sdc' | wc -l
```
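The pipeline above only prints the count; a small sketch like the following (assuming the data disks are named sdb and sdc, as above) turns it into an explicit pass/fail:

```bash
# Wrap the disk count in a pass/fail check; expects exactly 2 data disks.
disk_count=$(lsblk -d | grep -E 'sdb|sdc' | wc -l)
if [ "$disk_count" -eq 2 ]; then
    echo "Storage check passed: 2 data disks found"
else
    echo "Storage check failed: expected 2 disks, found $disk_count"
fi
```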
- Install the Provisioner API:

  - Set the repository URL:

```
CORTX_RELEASE_REPO="<URL to Cortx R1 stack release repo>"
```

  - Install the Provisioner API and requisite packages:

```
yum install -y yum-utils
yum-config-manager --add-repo "${CORTX_RELEASE_REPO}/3rd_party/"
yum install --nogpgcheck -y python3 python36-m2crypto salt salt-master salt-minion
rm -rf /etc/yum.repos.d/*3rd_party*.repo
yum-config-manager --add-repo "${CORTX_RELEASE_REPO}/cortx_iso/"
yum install --nogpgcheck -y python36-cortx-prvsnr
rm -rf /etc/yum.repos.d/*cortx_iso*.repo
yum clean all
rm -rf /var/cache/yum/

# pip3 dependencies
pip3 install jsonschema==3.0.2 requests
```
- Verify the provisioner version (0.36.0 and above):

```
provisioner --version
```
- Create the config.ini file:

IMPORTANT NOTE: Check every detail in this file carefully; it must match your node. Verify that the interface names are correct as per your node, then update the required details in `~/config.ini`, using the sample config.ini below as a starting point:

```
vi ~/config.ini
```

Sample config.ini:

```
[storage]
type=other

[srvnode-1]
hostname=host1.localdomain
network.data.private_ip=None
network.data.public_interfaces=eth1,eth2
network.data.private_interfaces=eth3,eth4
network.mgmt.interfaces=eth0
bmc.user=None
bmc.secret=None
```
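To cross-check the interface names in config.ini against what the node actually has, a sketch like this can help (standard iproute2 commands only; no provisioner-specific tooling assumed):

```bash
# List the NIC names the kernel knows about; compare against the
# network.*.interfaces entries in ~/config.ini.
ip -o link show | awk -F': ' '{print $2}'
```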
- Run the provisioner setup CLI command:

```
provisioner setup_provisioner srvnode-1:$(hostname -f) \
    --logfile --logfile-filename /var/log/seagate/provisioner/setup.log \
    --source rpm --config-path ~/config.ini \
    --dist-type bundle --target-build ${CORTX_RELEASE_REPO} --pypi-repo
```
- Update pillar data:

```
provisioner configure_setup /root/config.ini 1
```

- Encrypt all passwords:

```
salt-call state.apply components.system.config.pillar_encrypt
```

- Export pillar data as JSON:

```
provisioner pillar_export
```
Once the provisioner setup is done, verify the Salt master setup on the nodes (setup verification checklist):

```
salt '*' test.ping
salt "*" service.stop puppet
salt "*" service.disable puppet
salt '*' pillar.get release
salt '*' grains.get node_id
salt '*' grains.get cluster_id
salt '*' grains.get roles
```
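As a quick automated health check, the `test.ping` output can be scanned for unresponsive minions. A sketch using Salt's plain-text outputter (`--out=txt`, available in standard Salt installs):

```bash
# All minions should answer test.ping with True; flag any that do not.
salt '*' test.ping --out=txt | grep -v True \
    && echo "WARNING: some minions did not respond" \
    || echo "All minions responding"
```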
Deploy system-related components:

```
provisioner deploy_vm --states system --setup-type single
```

Deploy all 3rd-party components:

```
provisioner deploy_vm --states prereq --setup-type single
```
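These deployments can run for a while; a hedged wrapper like the one below surfaces a failure immediately. The log path is the one passed to `setup_provisioner` earlier; the deploy states may log elsewhere on your setup.

```bash
# Run each documented state in order and stop on the first failure,
# tailing the setup log for context.
for state in system prereq; do
    if ! provisioner deploy_vm --states "$state" --setup-type single; then
        echo "deploy_vm failed for state: $state"
        tail -n 50 /var/log/seagate/provisioner/setup.log
        break
    fi
done
```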
NOTE:

- `--target-build` should be a link to the base URL of the hosted 3rd_party and cortx_iso repos.
- For `--target-build`, use builds from the URL below based on the OS:
  centos-7.8.2003: `<build_url>/centos-7.8.2003/`, or contact the Cortx RE team for the latest URL.
- This command will ask for each node's root password during the initial cluster setup.
  This is a one-time activity required to set up password-less SSH across nodes.
- For setting up a cluster of more than 3 nodes, append `--name <setup_profile_name>`
  to the auto_deploy_vm command's input parameters.
Execute the destroy script:

```
/opt/seagate/cortx/provisioner/cli/destroy-vm --ctrlpath-states --iopath-states --prereq-states --system-states
```
To tear down the bootstrap step:

- Unmount gluster volumes:

```
umount $(mount -l | grep gluster | cut -d ' ' -f3)
```

- Stop services:

```
systemctl stop glustersharedstorage glusterfsd glusterd
systemctl stop salt-minion salt-master
```

- Uninstall the RPMs:

```
# Cortx Provisioner packages
yum erase -y cortx-prvsnr cortx-prvsnr-cli
# Gluster FS packages
yum erase -y gluster-fuse gluster-server
# Salt packages
yum erase -y salt-minion salt-master salt-api
# Salt dependency
yum erase -y python36-m2crypto
# Cortx Provisioner API packages
yum erase -y python36-cortx-prvsnr
yum autoremove -y
yum clean all
rm -rf /var/cache/yum
# Remove cortx-py-utils
pip3 uninstall -y cortx-py-utils
# Cleanup pip packages
pip3 freeze | xargs pip3 uninstall -y
```

- Clean up bricks and other directories:

```
# Cortx software dirs
rm -rf /opt/seagate/cortx
rm -rf /opt/seagate/cortx_configs
rm -rf /opt/seagate
# Bricks cleanup
test -e /var/lib/seagate && rm -rf /var/lib/seagate
test -e /srv/glusterfs && rm -rf /srv/glusterfs
test -e /var/cache/salt && rm -rf /var/cache/salt
# Cleanup Salt
rm -rf /var/cache/salt
rm -rf /etc/salt
# Cleanup Provisioner profile directory
rm -rf /root/.provisioner
```

- Clean up SSH:

```
rm -rf /root/.ssh
```
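A quick, hedged way to confirm the teardown actually completed; every check below should come back empty or fail with "No such file or directory":

```bash
# Residual packages: should print nothing after a clean teardown.
rpm -qa | grep -Ei 'cortx|salt|gluster'
# Residual gluster mounts: should print nothing.
mount -l | grep gluster
# Removed directories: each should be gone.
ls /opt/seagate /etc/salt /root/.provisioner 2>&1
```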