Oct 2022 Connections v8 release (#218)
sabrina-yee authored Oct 27, 2022
1 parent e524a41 commit d3e5e47
Showing 211 changed files with 8,329 additions and 603 deletions.
263 changes: 160 additions & 103 deletions README.md

Large diffs are not rendered by default.

24 changes: 16 additions & 8 deletions documentation/QUICKSTART.md
100644 → 100755
@@ -7,11 +7,11 @@ To set this up, you will need at least four machines (for this example, let us s
- ansible.internal.example.com is going to run the Ansible commands (i.e. the Ansible controller). A typical laptop-grade environment should suffice.
- web.internal.example.com is going to host, in this example, only Nginx and Haproxy. This is needed here only for the Customizer. At least 1 CPU and 2G of RAM are preferable.
- connections.internal.example.com is going to host IBM WebSphere, IHS and HCL Connections. We will also put OpenLDAP with 10 users, and IBM DB2, here. NFS will be set up for shared data and message stores. HCL Connections will be deployed as a small topology (single JVM). Here you need at least two CPUs and at least 16G of RAM to keep everything self-contained.
- cp.internal.example.com is going to host Kubernetes, act as the NFS server for persistent volumes, Docker Registry, and Component Pack on top of it. You need at least 32G of RAM and at least 8 CPUs to install the full offering.
- cp.internal.example.com is going to host Kubernetes, act as the NFS server for persistent volumes, Container Runtime, and Component Pack on top of it. You need at least 32G of RAM and at least 8 CPUs to install the full offering.

Once the installation is done, we will access our HCL Connections login page through https://connections.example.com/

Example inventory files for this Quick Start can be found in the environments/examples/cnx7/quick_start folder.
Example inventory files for this Quick Start can be found in the environments/examples/cnx8/quick_start folder.
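
If you prefer to adapt the example rather than use it in place, one optional approach (paths taken from this repository; the host names are placeholders you would replace with your own) is to copy the folder and edit it:

```
# Copy the shipped quick-start example and point it at your own machines.
cp -r environments/examples/cnx8/quick_start environments/my_quick_start
# Replace the *.internal.example.com host names with your own.
vi environments/my_quick_start/inventory.ini
```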

# Setting up your environment

@@ -185,23 +185,31 @@ There are two things you need to adapt before you try the installation:
To set up DB2, OpenLDAP, TDI, IBM WebSphere, IBM IHS and HCL Connections, run:

```
ansible-playbook -i environments/examples/cnx7/quick_start/inventory.ini playbooks/setup-connections-complete.yml
ansible-playbook -i environments/examples/cnx8/quick_start/inventory.ini playbooks/setup-connections-complete.yml
```
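
Before kicking off a long run like this, it can help to confirm that the controller can reach every host in the inventory; an optional check using standard Ansible:

```
# Optional sanity check: verify connectivity to all hosts in the quick start inventory.
ansible -i environments/examples/cnx8/quick_start/inventory.ini all -m ping
```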

## Setting up Component Pack for HCL Connections with all the dependencies

To set up Component Pack with Kubernetes, Docker, Docker Registry, NFS, Nginx and Haproxy all configured to support Customizer as well, run:
To set up Component Pack with Kubernetes, Container Runtime, NFS, Nginx and Haproxy all configured to support Customizer as well, run:

```
ansible-playbook -i environments/examples/cnx7/quick_start/inventory.ini playbooks/setup-component-pack-complete.yml
ansible-playbook -i environments/examples/cnx8/quick_start/inventory.ini playbooks/setup-component-pack-complete-harbor.yml
```
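
Once this playbook finishes, a quick optional check that the Component Pack pods came up (assuming the connections namespace used elsewhere in this repository's documentation):

```
# Optional check: list the Component Pack pods.
kubectl get pods -n connections
```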

## Setting up Connections Docs 2.0.1
## Run post-install tasks

To set up Connections Docs 2.0.1, just run:
Once your Component Pack installation is done, run this playbook to perform some post-installation configuration:

```
ansible-playbook -i environments/examples/cnx7/quick_start/inventory.ini playbooks/hcl/setup-connections-docs.yml
ansible-playbook -i environments/examples/cnx8/quick_start/inventory.ini playbooks/hcl/connections-post-install.yml
```

## Setting up Connections Docs 2.0.2

To set up Connections Docs 2.0.2, just run:

```
ansible-playbook -i environments/examples/cnx8/quick_start/inventory.ini playbooks/hcl/setup-connections-docs.yml
```

Note: if you are using the old format of inventory files, everything is backwards compatible. The only thing you need to add is cnx_was_servers to your Connections inventory (to make it the same as is already done for Docs).
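
For illustration only, a minimal sketch of what adding that group could look like, reusing the quick start host name from this guide (adjust it to your own inventory):

```
# Hypothetical example: append a cnx_was_servers group to an existing inventory file.
cat >> environments/examples/cnx8/quick_start/inventory.ini <<'EOF'

[cnx_was_servers]
connections.internal.example.com
EOF
```
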
23 changes: 16 additions & 7 deletions documentation/VARIABLES.md
100644 → 100755
@@ -134,6 +134,7 @@ Name | Default | Description
---- | --------| -------------
was_repository_url | *none* - required | WebSphere install kit download location
was_fixes_repository_url | *none* - required | WebSphere Fix Pack kit location to download
was_major_version | 8 | WebSphere major version
was_version | 8.5.5000.20130514_1044 | WebSphere Base version
was_fp_version | 8.5.5021.20220202_1245 | WebSphere Fix Pack version
java_version | 8.0.6015.20200826_0935 | (only for Java upgrade during FP16/18 install)
@@ -199,7 +200,7 @@ cnx_package | HCL_Connections_7.0_lin.tar | Connections install kit file
connections_wizards_package_name | HCL_Connections_7.0_wizards_lin_aix.tar | Connections Wizard kit file
setup_connections_wizards | true | true will run the Connections database wizard
cnx_force_repopulation | false | true will drop the Connections databases and recreate them in `setup-connections-wizards.yml` playbook
cnx_major_version | "7" | Connections major version to install
cnx_major_version | "8" | Connections major version to install
cnx_fixes_version | *none* - optional | If defined (eg. 6.5.0.0_CR1) will install the CR version
cnx_fixes_files | *none* - optional | If defined (eg. HC6.5_CR1.zip) and cnx_fixes_version is set, will download the CR install kit
cnx_application_ingress | *none* - required | Set as *dynamicHosts* in LotusConnections-config.xml
@@ -243,6 +244,7 @@ Name | Default | Description
---- | --------| -------------
cnx_docs_download_location | *none* - required | Connections Docs kit download location
cnx_docs_package_name | HCL_Docs_v202.zip | Connections Docs install kit file
docs_install_version | 2.0.2 | Docs version to be installed
hcl_program_folder | /opt/HCL | Location to store Docs program folders
conversion_install_folder | DocsConversion | Conversion program folder name
editor_install_folder | DocsEditor | Editor program folder name
@@ -299,7 +301,7 @@ docker_registry_url | {{ hostvars[groups['docker_registry'][0]]['inventory_hostn
registry_user | admin | Docker Registry user name
registry_password | password | Docker Registry user password
overlay2_enabled | true | true enables OverlayFS storage driver
kubernetes_version | 1.21.7 | Kubernetes version to be installed
kubernetes_version | 1.24.1 | Kubernetes version to be installed
kube_binaries_install_dir | /usr/bin | Kubernetes binary install directory
kube_binaries_download_url | https://storage.googleapis.com/kubernetes-release/release | Kubernetes binary download path
ic_internal | localhost | Connections server internal frontend host (eg. IHS host)
@@ -345,14 +347,16 @@ setup_customizer | true | True will deploy mw-proxy and setup customizations
elasticsearch_default_version | 7 | Default ElasticSearch version
elasticsearch_default_port | 30098 | ElasticSearch port
setup_elasticsearch | false | True will deploy ElasticSearch 5 (for Connections 6.5CR1)
setup_elasticsearch7 | true | True will deploy ElasticSearch 7 (for Connections 7)
setup_elasticsearch7 | false | True will deploy ElasticSearch 7 (for Connections 7)
setup_opensearch | True | True will deploy OpenSearch
setup_ingress | false | True will setup old ingress controller (for Connections 6.5CR1)
setup_community_ingress | true | True will setup community ingress controller (for Connections 7 onwards)
setup_tailored_exp | true | True will deploy Tailored Experience features for communities (for Connections 7 onwards)
setup_orientme | true | True will deploy Orient Me (make sure the corresponding setup_elasticsearch7 var is set to true)
setup_sanity | true | True will deploy sanity-watcher
setup_kudosboards | true | True will deploy Activities Plus, must have a license key (defined in kudos_boards_licence) for the feature to work
kudos_boards_licence | *none* | Activities Plus license key
setup_sanity | false | True will deploy sanity-watcher (for Connections 7)
setup_huddoboards | true | (replaces the old setup_kudosboards var) True will deploy Activities Plus, must have a license key (defined in huddo_boards_licence) for the feature to work
setup_huddoboards_ext | false | True will install Huddo Boards extension
huddo_boards_licence | *none* | Activities Plus license key
setup_elasticstack | false | True will setup ElasticStack
setup_elasticstack7 | false | True will setup ElasticStack7
setup_outlook_addin | true | True will deploy Outlook Desktop Plugin
@@ -364,7 +368,12 @@ integrations_msteams_tenant_id | changeme | Tenant ID to configure Microsoft Tea
integrations_msteams_client_id | changeme | Client ID to configure Microsoft Teams integration
integrations_msteams_client_secret | changeme | Kubernetes secret name for Microsoft Teams integration
integrations_msteams_auth_schema | 0 | Auth schema to configure Microsoft Teams integration

opensearch_version | 1.3.0 | OpenSearch version
opensearch_replicaset | 3 | Replica count to set in Helm charts for OpenSearch
opensearch_cluster_name | opensearch-cluster | OpenSearch cluster name
opensearch_default_port | 30099 | OpenSearch port
opensearch_ca_password | password | OpenSearch CA password
opensearch_key_password | password | OpenSearch Key password

### NFS Variables
Name | Default | Description
File renamed without changes.
139 changes: 139 additions & 0 deletions documentation/howtos/connections_upgrade_from_7.0_to_8.0.md
@@ -0,0 +1,139 @@
# Upgrading HCL Connections using Ansible automation

This automation is used to upgrade HCL Connections and Component Pack from 7.0 to 8.0.

For this example, we will show:

* How to use the Ansible automation to upgrade HCL Connections 7 to HCL Connections 8. This includes manually migrating data from MongoDB v3 to MongoDB v5 and from Elasticsearch 7 to OpenSearch.
* The logic behind it.

NOTE: If this is the very first document you are landing on, please ensure that you have already read our [README.md](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/README.md) and our [Quick Start Guide](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/documentation/QUICKSTART.md), especially if you have never used Ansible and/or this automation before.

Before you proceed, let's quickly go over a few important points.

### Setting up your inventory file

Please note that, if needed, you can override the defaults using the [files in this folder](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/environments/examples/cnx8/db2/). We will explain here what they do:

* We have our HCL Connections Wizards and HCL Connections installer living in a folder called Connections8, so we are setting the right paths here [#1](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/environments/examples/cnx8/db2/group_vars/all.yml#L40) and [#2](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/environments/examples/cnx8/db2/group_vars/all.yml#L47)
* Check the default supported version of IBM WebSphere [here](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/documentation/VARIABLES.md#was_fp_version:~:text=WebSphere%20Base%20version-,was_fp_version). If we want to install a specific version of IBM WebSphere, [specify the location here](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/environments/examples/cnx8/db2/group_vars/all.yml#L43-L45).
* Since the Connections kit names differ between versions, we can explicitly specify the [Connections install kit name](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/environments/examples/cnx8/db2/group_vars/all.yml#L50) and the [Connections Wizard package name](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/environments/examples/cnx8/db2/group_vars/all.yml#L51). Check the default values here: [#1](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/documentation/VARIABLES.md#:~:text=location%20to%20download-,cnx_package) and [#2](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/documentation/VARIABLES.md#:~:text=connections_wizards_package_name)
* The desired versions of Docker, Kubernetes and Helm can be set with the docker_version, kubernetes_version and helm_version variables in the [inventory file](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/environments/examples/cnx8/db2/group_vars/all.yml); see the check after this list. [Click here](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/documentation/VARIABLES.md) for more details and the supported default versions of this software.
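
A quick, optional way to see what the shipped example already sets for these variables before deciding what to override (the variable names come from the list above and from VARIABLES.md; the file path is the example inventory in this repository):

```
# Show the kit and version variables currently defined in the cnx8/db2 example group_vars.
grep -E 'cnx_package|connections_wizards_package_name|docker_version|kubernetes_version|helm_version' \
    environments/examples/cnx8/db2/group_vars/all.yml
```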

### Choosing operating system version

Use CentOS 7 or RHEL 8.6 (the later the release, the better). For this scenario, let's say you are using CentOS 7.9. Whenever you install any of the components mentioned here, whether through the automation or manually, always make sure the machine is configured properly, and just to be on the safe side run yum update before you start.


## Upgrading HCL Connections from 7.0 to 8.0

### Prerequisite

At this point we assume you have a running Connections 7 deployment with Component Pack installed.

### Setting up your inventory file

By now you have the idea that everything related to the installation/upgrade is handled by manipulating variables in your inventory files.
For this example, we will reference [this example inventory folder](https://github.com/HCL-TECH-SOFTWARE/connections-automation/tree/main/environments/examples/cnx8/db2).

If you make a simple diff between [this file](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/environments/examples/cnx8/db2/group_vars/all.yml) and [this file](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/environments/examples/cnx7/db2/group_vars/all.yml) you will see that now:

* We are pointing to folders with Connections 8, and WAS ND to the [default supported version](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/documentation/VARIABLES.md#was_fp_version:~:text=WebSphere%20Base%20version-,was_fp_version).
* We are not overriding any package or file names, since by default the automation now assumes version 8 and uses the version 8 package names.

### Running the upgrade

Run the playbook below. This will add/remove new IHS configurations, if any:

```
ansible-playbook -i environments/examples/cnx8/db2/inventory.ini playbooks/third_party/setup-webspherend.yml
```

And as a next step, let's upgrade HCL Connections to 8.0:

```
ansible-playbook -i environments/examples/cnx8/db2/inventory.ini playbooks/hcl/setup-connections-only.yml
```

The next step is to upgrade Component Pack from 7.0 to 8.0.

Run the playbooks below to upgrade and configure Nginx and Haproxy for HCL Connections 8.0:

```
ansible-playbook -i environments/examples/cnx8/db2/inventory.ini playbooks/third_party/setup-nginx.yml
ansible-playbook -i environments/examples/cnx8/db2/inventory.ini playbooks/third_party/setup-haproxy.yml
```

Run the playbook below to configure NFS. This playbook will also create and configure the OpenSearch and MongoDB 5 folders:

```
ansible-playbook -i environments/examples/cnx8/db2/inventory.ini playbooks/third_party/setup-nfs.yml
```
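
If you want to double-check the result, listing the exports is a simple optional verification (run it on the NFS server itself):

```
# Optional check: confirm the exported directories, including the new OpenSearch and MongoDB 5 folders, are published.
showmount -e localhost
```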

Run the playbook below to install containerd (the container runtime):

```
ansible-playbook -i environments/examples/cnx8/db2/inventory.ini playbooks/third_party/setup-containerd.yml
```
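
An optional quick check that the runtime is actually up after the playbook:

```
# Confirm the containerd service is active on the Kubernetes nodes.
systemctl is-active containerd
```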

To deploy Component Pack 8, we use HCL Software’s Harbor container registry. We also strongly recommend that you [install a container runtime](https://help.hcltechsw.com/connections/v8/admin/install/upgrade_considerations.html#upgrade_considerations__section_sqh_ktx_bvb) (the containerd installation playbook is mentioned in the previous step), follow the steps in [migrating from Docker to containerd](https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/), [upgrade Helm to version 3.7.2](https://help.hcltechsw.com/connections/v8/admin/install/upgrade_considerations.html#upgrade_considerations__section_bqv_2vx_bvb) and [upgrade Kubernetes](https://help.hcltechsw.com/connections/v8/admin/install/upgrade_considerations.html#upgrade_considerations__section_avm_v5x_bvb) before moving to Component Pack 8.

Kubernetes can be upgraded using the playbook below. Add the 'upgrade_version' variable to the [inventory file](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/environments/examples/cnx8/db2/group_vars/all.yml). Follow the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) for details on how to upgrade the Kubernetes version.

```
ansible-playbook -i environments/examples/cnx8/db2/inventory.ini playbooks/third_party/kubernetes/upgrade-kubernetes.yml
```
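
After the upgrade it is worth confirming that every node reports the expected version:

```
# Each node should report the upgraded Kubernetes version.
kubectl get nodes -o wide
```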

For HCL Connections 8 we need to upgrade MongoDB from v3 to v5, and OpenSearch replaces Elasticsearch 7, so we need to back up the data first. This is a manual step. Please refer to the links below:

[Backup mongo3 data](https://help.hcltechsw.com/connections/v8/admin/install/cp_install_services_tasks.html#cp_install_services_tasks__backup_mongo3)

[Backup ElasticSearch 7 data](https://help.hcltechsw.com/connections/v8/admin/install/cp_install_services_tasks.html#cp_install_services_tasks__backup_es7)

Delete the existing ingresses before the Component Pack deployment, otherwise the infrastructure deployment will fail:

```
kubectl delete ingress -n connections $(kubectl get ingress -n connections | awk '{print $1}' | grep -vE "NAME")
```
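
You can confirm that nothing was left behind before continuing:

```
# Should return no ingress resources in the connections namespace.
kubectl get ingress -n connections
```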

Access to the HCL Harbor registry is needed to install the Component Pack. You can provide the Harbor credentials as environment variables.

```
export HARBOR_USERNAME=<<harbor username>>
export HARBOR_PASSWORD=<<harbor password>>
```

Add the Harbor variables to the [inventory file](https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/main/environments/examples/cnx8/db2/group_vars/all.yml#L85-L86).

Then execute:

```
ansible-playbook -i environments/examples/cnx8/db2/inventory.ini playbooks/setup-component-pack-complete-harbor.yml
```
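
When the playbook completes, an optional check of the deployed releases and pods:

```
# List the Helm releases and pods deployed for Component Pack.
helm list -n connections
kubectl get pods -n connections
```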

To migrate data from mongo 3 to mongo 5, [perform these steps](https://help.hcltechsw.com/connections/v8/admin/install/migrating_data_mongodb_v3_v5.html). This is a manual step.

To migrate data from Elasticsearch 7 to OpenSearch, [perform these steps](https://help.hcltechsw.com/connections/v8/admin/install/cp_migrate_data_from_es7_to_opensearch.html). This is a manual step.

After this, delete all the pods using the command below on the Kubernetes master node. This will restart all the Component Pack pods:

```
kubectl delete pods -n connections $(kubectl get pods -n connections | awk '{print $1}' | grep -vE "NAME|bootstrap")
```
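
You can then watch the pods come back until everything is Running again:

```
# Watch pod status until all Component Pack pods are back up (Ctrl+C to stop).
kubectl get pods -n connections -w
```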

Now run the post-installation tasks.
Once your Component Pack installation is done, run this playbook to perform some post-installation configuration:

```
ansible-playbook -i environments/examples/cnx8/db2/inventory.ini playbooks/hcl/connections-post-install.yml
```

Once this is done, log in to your HCL Connections 8 installation, just to confirm that all is fine.


## Final words

As you have probably noticed, the same playbooks can be used for both installations and upgrades; they are designed that way. The worst that can happen is that some services are restarted while the playbooks ensure that everything is in the state described.

This also gives you the idea of how easy it is, this way, to deploy a new build every day, or even multiple times per day, simply by uploading a new package to the right folder, without changing anything in Ansible itself.
4 changes: 2 additions & 2 deletions documentation/howtos/setup_connections_with_different_database_backends.md
100644 → 100755
@@ -98,7 +98,7 @@ With the latest changes, what actually happens is:
- TDI will use db_username, db_password, db_hostname, db_port to configure itself to successfully run.
- HCL Connections response file will be populated based on *_db structures.

## Using Oracle 19c as a database backend for HCL Connections 7
## Using Oracle 19c as a database backend for HCL Connections 8

If you want to try HCL Connections with Oracle 19c, now that is possible as well.

@@ -157,7 +157,7 @@ ic360_db={ 'name': 'LSCONN', 'server': 'db1.internal.example.com', 'user': 'ESSU
- IBM TDI will find oracle_servers group name in the inventory, and because of that decide to download Oracle 19c JDBC driver from oracle_download_location necessary to successfully execute TDI scripts.
- All variables with *_db will be added to the response file for HCL Connections installer.

## Using Microsoft SQL Server 2019 as a database backend for HCL Connections 7
## Using Microsoft SQL Server 2019 as a database backend for HCL Connections 8

This was, obviously, implemented and tested on Linux (CentOS 7.9 to be more specific).

