About

This module helps install the OpenShift Origin Platform as a Service (PaaS). Through the declaration of the openshift_origin class, you can configure the OpenShift Origin broker, node, and supporting services, including ActiveMQ, MongoDB, and BIND (named), as well as OS settings such as the firewall, services, and NTP.


Authors

  • Jamey Owens

  • Ben Klang

  • Ben Langfeld

  • Krishna Raman

  • N. Harrison Ripps

Requirements

  • Puppet >= 3

  • Facter >= 1.6.17

Installation

The module can be installed directly from the Puppet Forge via the puppet utility:

puppet module install openshift/openshift_origin

Alternatively, the module can be obtained from the GitHub repository.

  1. Download the Zip file from GitHub

  2. Upload the Zip file to your Puppet Master.

  3. Unzip the file. This will create a new directory called puppet-openshift_origin-{commit hash}

  4. Rename this directory to just openshift_origin and place it in your modulepath.

Configuration

There is one class (openshift_origin) that needs to be declared on all nodes managing any component of OpenShift Origin. These nodes are configured using the parameters of this class.

Using Parameterized Classes

Example: Single host (broker+console+node) which uses the Avahi MDNS and htpasswd Auth plugin:
class { 'openshift_origin' :
  domain                        => 'example.com',
  node_unmanaged_users          => ['root'],
  development_mode              => true,
  conf_node_external_eth_dev    => 'eth0',
  install_method                => 'yum',
  register_host_with_nameserver => true,
  broker_auth_plugin            => 'htpasswd',
  broker_dns_plugin             => 'avahi',
}
Example: Single host (broker+console+node) which uses the Kerberos Auth plugin and GSS-TSIG.
class { 'openshift_origin' :
  domain                        => 'example.com',
  node_unmanaged_users          => ['root'],
  development_mode              => true,
  conf_node_external_eth_dev    => 'eth0',
  install_method                => 'yum',
  register_host_with_nameserver => true,
  broker_auth_plugin            => 'remote-user',
  broker_dns_plugin             => 'nsupdate',
  bind_krb_principal            => $hostname,
  bind_krb_keytab               => '/etc/dns.keytab',
  broker_krb_keytab             => '/etc/http.keytab',
  broker_krb_auth_realms        => 'EXAMPLE.COM',
  broker_krb_service_name       => $hostname,
}

Please note:

  • The Broker needs to be enrolled in the KDC both as a host (host/node_fqdn) and as a service (HTTP/node_fqdn)

  • The keytab should be generated and placed on the Broker machine, and Apache must be able to read it (chown apache <kerberos_keytab>)

  • As in the example configuration above:

    • set broker_auth_plugin to 'kerberos'

    • set broker_krb_keytab and bind_krb_keytab to the absolute file location of the keytab

    • set broker_krb_auth_realms to the kerberos realm that the Broker host is enrolled with

    • set broker_krb_service_name to the FQDN of the enrolled kerberos service, e.g. $hostname

  • After setup, to test:

    • authentication: kinit <user> then curl -Ik --negotiate -u : <node_fqdn>

    • GSS-TSIG (should return nil):

      $ cd /var/www/openshift/broker
      $ bundle --local
      $ rails console
      > d = OpenShift::DnsService.instance
      > d.register_application "appname", "namespace", "node_fqdn"
      => nil
    • For any errors, on the Broker, check /var/log/openshift/broker/httpd/error_log.

Puppet Parameters

An exhaustive list of the parameters you can specify with puppet configuration follows.

Note
Several parameters set passwords used to secure various services. You are advised to use only alphanumeric values for these, as other characters may cause syntax errors depending on context. If non-alphanumeric values are required, update them separately after installation.

roles

Choose from the following roles to be configured on this node.

  • broker - Installs the broker and console.

  • node - Installs the node and cartridges.

  • msgserver - Installs ActiveMQ message broker.

  • datastore - Installs MongoDB (not sharded/replicated)

  • nameserver - Installs a BIND DNS server configured with a TSIG key for updates.

  • load_balancer - Installs HAProxy and Keepalived for Broker API high-availability.

Default: ['broker','node','msgserver','datastore','nameserver']

Note
Multiple servers are required when using the load_balancer role.
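
For example, a minimal sketch of splitting roles across two hosts in a site.pp (hostnames, IPs, and the domain are placeholders) might look like this:

node 'broker.example.com' {
  class { 'openshift_origin' :
    # Broker host also carries the support services in this sketch.
    roles           => ['broker', 'msgserver', 'datastore', 'nameserver'],
    domain          => 'example.com',
    broker_hostname => 'broker.example.com',
  }
}

node 'node1.example.com' {
  class { 'openshift_origin' :
    # Dedicated node host; it only needs to know where to reach the
    # broker and the nameserver installed on the other host.
    roles              => ['node'],
    domain             => 'example.com',
    node_hostname      => 'node1.example.com',
    broker_ip_addr     => '10.0.0.10',
    nameserver_ip_addr => '10.0.0.10',
  }
}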

install_method

Choose from the following ways to provide packages:

  • none - install sources are already set up when the script executes

  • yum - set up yum repos manually

    • repos_base

    • os_repo

    • os_updates_repo

    • jboss_repo_base

    • jenkins_repo_base

    • optional_repo

Default: yum
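
As a hedged sketch, choosing the yum install method together with explicit repository locations (all URLs below are placeholders) could look like:

class { 'openshift_origin' :
  install_method  => 'yum',
  repos_base      => 'http://mirror.example.com/openshift-origin/',        # placeholder
  os_repo         => 'http://mirror.example.com/centos/6/x86_64/os/',
  os_updates_repo => 'http://mirror.example.com/centos/6/updates/x86_64/',
  optional_repo   => 'http://mirror.example.com/epel/6/x86_64/',
}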

parallel_deployment

This flag is used to control some module behaviors when an outside utility (like oo-install) is managing the deployment of OpenShift across multiple hosts simultaneously. Some configuration tasks can't be performed during a multi-host parallel installation, and this boolean enables the user to indicate whether or not those tasks should be attempted.

Default: false

repos_base

Base path to repository for OpenShift Origin

Default: Nightlies

architecture

CPU architecture to use when defining the OpenShift Origin yum repositories

Default: $::architecture fact

Note
Currently only the x86_64 architecture is supported.

override_install_repo

Repository path override. Uses dependencies from repos_base but uses override_install_repo path for OpenShift RPMs. Used when doing local builds.

Default: none

os_repo

The URL of a RHEL/CentOS 6 yum repository used with the "yum" install method. Should end in x86_64/os/.

Default: no change

os_updates_repo

The URL of a RHEL/CentOS 6 yum updates repository used with the "yum" install method. Should end in x86_64/.

Default: no change

jboss_repo_base

The URL of the JBoss repositories used with the "yum" install method. No repository is installed if this is not specified.

jenkins_repo_base

The URL of the Jenkins repository used with the "yum" install method. No repository is installed if this is not specified.

optional_repo

The URL of the EPEL or optional repository used with the "yum" install method. No repository is installed if this is not specified.

domain

The network domain under which apps and hosts will be placed.

Default: example.com

broker_hostname

node_hostname

nameserver_hostname

msgserver_hostname

datastore_hostname

These supply the FQDNs of the hosts containing these components. They are used to configure each host's name at install time, and to configure the broker application to reach the services it needs.

Default: the role name plus the domain, e.g. broker.example.com (except the nameserver, which defaults to ns1.example.com)

Note
If installing a nameserver, the script will also create DNS entries for the hostnames of the other components being installed on this host. If you are using a separately managed nameserver, you are responsible for all necessary DNS entries.

datastore1_ip_addr|datastore2_ip_addr|datastore3_ip_addr

Default: undef

IP addresses of the first 3 MongoDB servers in a replica set. Add datastoreX_ip_addr parameters for larger clusters.

nameserver_ip_addr

IP of a nameserver instance or current IP if installing on this node. This is used by every node to configure its primary name server.

Default: the current IP (at install)

bind_key

When the nameserver is remote, use this to specify the key for updates. This is the "Key:" field from the .private key file generated by dnssec-keygen. This field is required on all nodes.

bind_key_algorithm

When using a BIND key, this is the algorithm used for it.

Default: HMAC-MD5

bind_krb_keytab

When the nameserver is remote, Kerberos keytab together with principal can be used instead of the dnssec key for updates.

bind_krb_principal

When the nameserver is remote, this Kerberos principal together with Kerberos keytab can be used instead of the dnssec key for updates.

Example: 'DNS/broker.example.com@EXAMPLE.COM'
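
For instance, a hedged sketch of a broker that updates a remote nameserver over TSIG (the key below is a made-up placeholder; use the "Key:" value from your dnssec-keygen .private file):

class { 'openshift_origin' :
  roles              => ['broker'],
  domain             => 'example.com',
  nameserver_ip_addr => '10.0.0.53',                  # remote BIND server
  bind_key           => 'GwhJNLZPghbpTSm4HmOlXQ==',   # placeholder TSIG key
  bind_key_algorithm => 'HMAC-MD5',
}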

aws_access_key_id

This and the next value are Amazon AWS security credentials. The aws_access_key_id is a string which identifies an access credential.

aws_secret_key

This is the secret portion of the AWS access credentials identified by aws_access_key_id.

aws_zone_id

This is the ID string of the AWS Route 53 hosted zone which will contain the OpenShift application records.

conf_nameserver_upstream_dns

List of upstream DNS servers to use when installing a nameserver on this node.

Default: ['8.8.8.8']

broker_ip_addr

This is the broker IP address that nodes record. It is also used as the default nameserver IP if none is given.

Default: the current IP (at install)

broker_cluster_members

An array of broker hostnames that will be load-balanced for high-availability.

Default: undef

broker_cluster_ip_addresses

An array of Broker IP addresses within the load-balanced cluster.

Default: undef

broker_virtual_ip_address

The virtual IP address that will front-end the Broker cluster.

Default: undef

broker_virtual_hostname

The hostname that represents the Broker API cluster. This name is associated with broker_virtual_ip_address and added to named for DNS resolution.

Default: "broker.${domain}"

load_balancer_master

Sets the state of the load-balancer. Valid options are true or false. true sets the load-balancer as the active listener for the Broker cluster Virtual IP address. Only 1 load_balancer_master is allowed within a Broker cluster.

Default: false

load_balancer_auth_password

The password used to secure communication between the load-balancers within a Broker cluster.

Default: 'changeme'
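
Putting the clustering parameters together, a hedged sketch of one member of a two-broker, load-balanced cluster (all hostnames and IPs are placeholders) might look like:

class { 'openshift_origin' :
  roles                       => ['broker', 'load_balancer'],
  domain                      => 'example.com',
  broker_hostname             => 'broker1.example.com',
  broker_cluster_members      => ['broker1.example.com', 'broker2.example.com'],
  broker_cluster_ip_addresses => ['10.0.0.11', '10.0.0.12'],
  broker_virtual_ip_address   => '10.0.0.100',
  broker_virtual_hostname     => 'broker.example.com',
  load_balancer_master        => true,          # only one master per cluster
  load_balancer_auth_password => 'changeme',    # placeholder; change it
}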

node_ip_addr

This is the public IP address the node advertises, if different from the one on its NIC.

Default: the current IP (at install)

Node Resource Limits

Note
The following resource limits must be the same within a given district.

node_profile

This is the specific node’s gear profile

Default: small

node_quota_files

The max number of files allowed in each gear.

Default: 80000

node_quota_blocks

The max storage capacity allowed in each gear (1 block = 1024 bytes)

Default: 1048576

node_max_active_gears

max_active_gears is used for limiting/guiding gear placement. For no over-commit, this should be (total system memory - 1 GiB) / memory_limit_in_bytes.

Default: 100
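
As a worked example under assumed numbers: for a node with 16 GiB of RAM and the default 512 MiB gear memory limit, no over-commit works out to (16 GiB - 1 GiB) / 512 MiB = 30 active gears. A hedged sketch pinning those values:

class { 'openshift_origin' :
  roles                      => ['node'],
  node_max_active_gears      => 30,           # (16 GiB - 1 GiB) / 512 MiB
  node_memory_limit_in_bytes => 536870912,    # 512 MiB per gear (module default)
}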

node_no_overcommit_active

no_overcommit_active enforces max_active_gears more stringently than normal; however, it also adds overhead to gear creation, so it should only be set to true when needed, e.g. when enforcing single tenancy on a node.

Default: false

node_limits_nproc

max number of processes

Default: 250

node_tc_max_bandwidth

mbit/sec - Total bandwidth allowed for Libra

Default: 800

node_tc_user_share

mbit/sec - bandwidth allotted to each user

Default: 2

node_cpu_shares

cpu share percentage for each gear

Default: 128

node_cpu_cfs_quota_us

Default: 100000

node_memory_limit_in_bytes

gear memory limit in bytes

Default: 536870912 (512MB)

node_memsw_limit_in_bytes

gear max memory limit including swap (512M + 100M swap)

Default: 641728512

node_memory_oom_control

kill processes when hitting out of memory

Default: 1

node_throttle_cpu_shares

cpu share percentage each gear gets at throttle

Default: 128

node_throttle_cpu_cfs_quota_us

Default: 30000

node_throttle_apply_period

Default: 120

node_throttle_apply_percent

Default: 30

node_throttle_restore_percent

Default: 70

node_boosted_cpu_cfs_quota_us

Default: 200000

node_boosted_cpu_shares

cpu share percentage each gear gets while boosted

Default: 30000

configure_ntp

Enabling this configures NTP. It is important that the time be synchronized across hosts because MCollective messages have a TTL of 60 seconds and may be dropped if the clocks are too far out of synch. However, NTP is not necessary if the clock will be kept in synch by some other means.

Default: true

ntp_servers

If configure_ntp is set to true (default), ntp_servers allows users to specify an array of NTP servers used for clock synchronization.

Default: ['time.apple.com iburst', 'pool.ntp.org iburst', 'clock.redhat.com iburst']

Note
Use iburst after every ntp server definition to speed up the initial synchronization.

msgserver_cluster

Set to true to cluster ActiveMQ for high-availability and scalability of OpenShift message queues.

Default: false

msgserver_cluster_members

An array of ActiveMQ server hostnames. Required when parameter msgserver_cluster is set to true.

Default: undef

mcollective_cluster_members

DEPRECATED: use msgserver_cluster_members instead. If both are set, they must match.

Default: $msgserver_cluster_members

msgserver_password

Password used by ActiveMQ’s amquser. The amquser is used to authenticate ActiveMQ inter-cluster communication. Only used when msgserver_cluster is true.

Default: 'changeme'
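
A hedged sketch of one member of a three-node ActiveMQ cluster (hostnames and passwords are placeholders):

class { 'openshift_origin' :
  roles                     => ['msgserver'],
  domain                    => 'example.com',
  msgserver_cluster         => true,
  msgserver_cluster_members => ['msg1.example.com', 'msg2.example.com', 'msg3.example.com'],
  msgserver_password        => 'changeme',    # inter-cluster amquser password
  mcollective_password      => 'changeme',    # must match brokers and nodes
}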

msgserver_admin_password

This is the admin password for the ActiveMQ admin console, which is not needed by OpenShift but might be useful in troubleshooting.

Default: a scrambled (randomly generated) value

msgserver_tls_enabled

This configures MCollective and ActiveMQ to use end-to-end encryption over TLS. Use enabled to support both TLS and non-TLS connections, or strict to support only TLS.

Default: 'disabled'

msgserver_tls_keystore_password

The password used to protect the keystore. It must be greater than 6 characters. This is required.

Default: password

msgserver_tls_ca

Location of the CA certificate

Default: /var/lib/puppet/ssl/certs/ca.pem

msgserver_tls_cert

Location of the certificate

Default: /var/lib/puppet/ssl/certs/${lower_fqdn}.pem

msgserver_tls_key

Location of the certificate key

Default: /var/lib/puppet/ssl/private_keys/${lower_fqdn}.pem
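
For example, a hedged sketch that turns on strict TLS and keeps the module-default certificate paths (the keystore password is a placeholder):

class { 'openshift_origin' :
  roles                           => ['msgserver'],
  msgserver_tls_enabled           => 'strict',         # require TLS everywhere
  msgserver_tls_keystore_password => 'changeme123',    # placeholder, more than 6 characters
}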

mcollective_user

mcollective_password

This is the user and password shared between broker and node for communicating over the mcollective topic channels in ActiveMQ. Must be the same on all broker and node hosts.

Default: mcollective/marionette

mongodb_admin_user

mongodb_admin_password

These are the username and password of the administrative user that will be created in the MongoDB datastore. These credentials are not used by this script or by OpenShift, but an administrative user must be added to MongoDB in order for it to enforce authentication.

Default: admin/mongopass

Note
The administrative user will not be created if CONF_NO_DATASTORE_AUTH_FOR_LOCALHOST is enabled.

mongodb_broker_user

mongodb_broker_password

These are the username and password of the normal user that will be created for the broker to connect to the MongoDB datastore. The broker application’s MongoDB plugin is also configured with these values.

Default: openshift/mongopass

mongodb_name

This is the name of the database in MongoDB in which the broker will store data.

Default: openshift_broker

mongodb_port

The TCP port that MongoDB listens on.

Default: '27017'

mongodb_replicasets

Enable/disable MongoDB replica sets for database high-availability.

Default: false

mongodb_replica_name

The MongoDB replica set name when $mongodb_replicasets is true.

Default: 'openshift'

mongodb_replica_primary

Set the host as the primary with true or secondary with false. Must be set on one and only one host within the mongodb_replicasets_members array.

Default: undef

mongodb_replica_primary_ip_addr

The IP address of the Primary host within the MongoDB replica set.

Default: undef

mongodb_replicasets_members

An array of [host:port] of replica set hosts. Example: ['10.10.10.10:27017', '10.10.10.11:27017', '10.10.10.12:27017']

Default: undef

mongodb_keyfile

The file containing the $mongodb_key used to authenticate MongoDB replica set members.

Default: '/etc/mongodb.keyfile'

mongodb_key

The key used by members of a MongoDB replica set to authenticate one another.

Default: 'changeme'
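
A hedged sketch of the primary member of a three-host MongoDB replica set (IPs, key, and passwords are placeholders; the secondaries would set mongodb_replica_primary => false):

class { 'openshift_origin' :
  roles                           => ['datastore'],
  mongodb_replicasets             => true,
  mongodb_replica_name            => 'openshift',
  mongodb_replica_primary         => true,
  mongodb_replica_primary_ip_addr => '10.0.0.20',
  mongodb_replicasets_members     => ['10.0.0.20:27017', '10.0.0.21:27017', '10.0.0.22:27017'],
  mongodb_keyfile                 => '/etc/mongodb.keyfile',
  mongodb_key                     => 'changeme',     # shared replica-set key
  mongodb_admin_password          => 'changeme',
  mongodb_broker_password         => 'changeme',
}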

openshift_user1

openshift_password1

This user and password are entered in the /etc/openshift/htpasswd file as a demo/test user. You will likely want to remove it after installation (or just use a different auth method).

Default: demo/changeme

conf_broker_auth_salt

conf_broker_auth_private_key

Salt and private key used when generating secure authentication tokens for application-to-broker communication. Requests like scale up/down and Jenkins builds use these authentication tokens. These values must be the same on all broker nodes.

Default: self-signed keys are generated. This will not work with a multi-broker setup.

conf_broker_default_templates

Customize default app templates for specified framework cartridges. Space-separated list of elements <cartridge-name>|<git url>. The URLs must be available to all nodes; each URL will be cloned as the git repository for the cartridge at app creation unless the user specifies their own. e.g.: DEFAULT_APP_TEMPLATES=php-5.3|http://example.com/php.git perl-5.10|file:///etc/openshift/cart.conf.d/templates/perl.git WARNING: do not include private credentials in any URL; they would be visible in every app's cloned repository.

Default: ''

conf_broker_valid_gear_cartridges

Enumerate the set of valid gear sizes for a given cartridge. If not specified, it is assumed the cartridge can run on any defined gear size. Space-separated list of elements <cartridge-name>|<size1,size2>, e.g.: VALID_GEAR_SIZES_FOR_CARTRIDGE="php-5.3|medium,large jbossews-2.0|large"

Default: ''

The URL to use for logging a user out of the console. When set to nothing, no logout link is displayed.

Default: ''

Relative path to the product logo URL.

Default: '/assets/logo-origin.svg' if ose_version is undef; '/assets/logo-enterprise-horizontal.svg' if ose_version is set

conf_console_product_title

The OpenShift instance name shown in the console.

Default: 'OpenShift Origin' if ose_version is undef; 'OpenShift Enterprise' if ose_version is set

conf_broker_multi_haproxy_per_node

This setting is applied on a per-scalable-application basis. When set to true, OpenShift will allow multiple instances of the HAProxy gear for a given scalable app to be established on the same node. Otherwise, on a per-scalable-application basis, a maximum of one HAProxy gear can be created for every node in the deployment (this is the default behavior, which protects scalable apps from single points of failure at the Node level).

Default: false

conf_broker_session_secret

conf_console_session_secret

Session secrets used to encode cookies used by console and broker. This value must be the same on all broker nodes.

Default: undef

conf_ha_dns_prefix

conf_ha_dns_suffix

Prefix and suffix used to form highly-available application URLs: http://${HA_DNS_PREFIX}${APP_NAME}-${DOMAIN_NAME}${HA_DNS_SUFFIX}.${CLOUD_DOMAIN}. For example, with the defaults an application "myapp" in domain "mydomain" under cloud domain example.com gets http://ha-myapp-mydomain.example.com.

Default prefix: 'ha-'
Default suffix: ''

conf_valid_gear_sizes

List of all gear sizes that will be used in this OpenShift installation.

Default: ['small']

conf_default_gear_size

Default gear size if one is not specified.

Default: 'small'

conf_default_gear_capabilities

List of all gear sizes that newly created users will be able to create.

Default: ['small']
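
Taken together, a hedged sketch that adds a 'medium' size while keeping new users limited to 'small' gears:

class { 'openshift_origin' :
  conf_valid_gear_sizes          => ['small', 'medium'],
  conf_default_gear_size         => 'small',
  conf_default_gear_capabilities => ['small'],
}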

conf_default_max_domains

Default max number of domains a user is allowed to use

Default: 10

conf_default_max_gears

Default max number of gears a user is allowed to use

Default: 100

conf_broker_default_region_name

Default region if one is not specified.

Default: ""

conf_broker_allow_region_selection

Whether the user should be allowed to select the region the application is placed in.

Default: true

conf_broker_use_predictable_gear_uuids

When true, new gear UUIDs (and thus gear usernames) are created with the format: <domain_namespace>-<app_name>-<gear_index>

Default: false

conf_broker_require_districts

When true, gear placement will fail if there are no available districts with the correct gear profile.

Default: true

conf_broker_require_zones

When true, gear placement will fail if there are no available zones with the correct gear profile.

Default: false

conf_broker_zone_min_gear_group

Desired minimum number of zones across which gears in an application's gear groups are distributed.

Default: 1

broker_external_access_admin_console

When true, enable access to the administration console. Authentication for the administration console is only handled via the ldap broker auth plugin, using broker_admin_console_ldap_uri.

Default: false

broker_dns_plugin

DNS plugin used by the broker to register application DNS entries. Options:

  • nsupdate - nsupdate based plugin. Supports TSIG and GSS-TSIG based authentication. Uses bind_key for TSIG, and bind_krb_keytab and bind_krb_principal for GSS-TSIG auth.

  • avahi - sets up mDNS-based DNS resolution. Works only for all-in-one installations.

  • route53 - use AWS Route53 for dynamic DNS service. Requires AWS key ID and secret and a delegated zone ID

Default: 'nsupdate'

broker_auth_plugin

Authentication setup for users of the OpenShift service. Options:

  • mongo - Stores username and password in mongo.

  • kerberos - Kerberos based authentication. Uses broker_krb_service_name, broker_krb_auth_realms, broker_krb_keytab values.

  • htpasswd - Stores username/password in an htpasswd flat file.

  • ldap - LDAP based authentication. Uses broker_ldap_uri.

Default: htpasswd

broker_krb_service_name

The KrbServiceName value for mod_auth_kerb configuration. This value will be prefixed with 'HTTP/' to create the krb5 service principal.

Default: hostname

broker_krb_auth_realms

The KrbAuthRealms value for mod_auth_kerb configuration

broker_krb_keytab

The Krb5KeyTab value of mod_auth_kerb is not configurable — the keytab is expected in /var/www/openshift/broker/httpd/conf.d/http.keytab

broker_ldap_uri

URI to the LDAP server (e.g. ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)). Set broker_auth_plugin to ldap to enable this feature.

broker_ldap_bind_dn

LDAP DN (distinguished name) of the user to bind to the directory with (e.g. cn=administrator,cn=Users,dc=domain,dc=com). Default is anonymous bind.

broker_ldap_bind_password

Password of the bind user set in broker_ldap_bind_dn. The default is an anonymous bind with a blank password.
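
A hedged sketch of LDAP-backed broker authentication (server, base DN, and credentials are placeholders):

class { 'openshift_origin' :
  broker_auth_plugin        => 'ldap',
  broker_ldap_uri           => 'ldap://ldap.example.com:389/ou=People,dc=example,dc=com?uid?sub?(objectClass=*)',
  broker_ldap_bind_dn       => 'cn=openshift,cn=Users,dc=example,dc=com',
  broker_ldap_bind_password => 'changeme',    # omit both bind values for an anonymous bind
}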

broker_admin_console_ldap_uri

URI to the LDAP server for admin console access (e.g. ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)). Set broker_external_access_admin_console to enable this feature.

node_shmmax

kernel.shmmax sysctl setting for /etc/sysctl.conf

The default should work for most deployments, but if you need to tune it higher, the general recommendations are as follows:

shmmax = shmall * PAGE_SIZE
- PAGE_SIZE = getconf PAGE_SIZE
- shmall = cat /proc/sys/kernel/shmall

It is not recommended to set shmmax higher than 80% of the total available RAM on the system (expressed in bytes).

Default: kernel.shmmax = 68719476736

node_shmall

kernel.shmall sysctl setting for /etc/sysctl.conf. The kernel's built-in default is 2097152 pages.

This parameter sets the total amount of shared memory pages that can be used system wide. Hence, SHMALL should always be at least ceil(shmmax/PAGE_SIZE).

Default: kernel.shmall = 4294967296
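
As a rough worked example, assuming the common 4 KiB page size, the default shmmax of 68719476736 bytes needs shmall of at least 68719476736 / 4096 = 16777216 pages, which the default shmall of 4294967296 comfortably exceeds. A hedged sketch pinning the defaults explicitly:

class { 'openshift_origin' :
  roles       => ['node'],
  node_shmmax => 68719476736,   # bytes (64 GiB)
  node_shmall => 4294967296,    # pages; well above the 16777216-page minimum
}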

node_container_plugin

Specify the container type to use on the node.

  • selinux - This is the default OpenShift Origin container type. At this time there are no other supported plugins.

Default: 'selinux'

node_frontend_plugins

Specify one or more plugins used to register HTTP and web-socket connections for applications. Options:

  • apache-mod-rewrite - Mod-Rewrite based plugin for HTTP and HTTPS requests. Well suited for installations with a lot of creates/deletes/scale actions. Deprecated in OSE 2.2.

  • apache-vhost - VHost based plugin for HTTP and HTTPS. Suited for installations with less app create/delete activity. Easier to customize. If apache-mod-rewrite is also selected, apache-vhost will be ignored

  • nodejs-websocket - Web-socket proxy listening on ports 8000/8444

  • haproxy-sni-proxy - TLS proxy using SNI routing on ports 2303 through 2308. Requires /usr/sbin/haproxy15 (haproxy-1.5-dev19 or later).

Default: ['apache-vhost','nodejs-websocket']
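
For a node expected to see heavy app create/delete churn, a hedged sketch swapping in the mod_rewrite frontend might be:

class { 'openshift_origin' :
  roles                 => ['node'],
  node_frontend_plugins => ['apache-mod-rewrite', 'nodejs-websocket'],
}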

node_unmanaged_users

List of user names who have UIDs in the range of OpenShift gears but must be excluded from OpenShift gear setups.

Default: []

conf_node_external_eth_dev

External facing network device. Used for routing and traffic control setup.

Default: eth0

conf_node_proxy_ports_per_gear

Number of proxy ports available per gear.

Default: 5

conf_node_public_key

conf_node_private_key

Public and private keys used for gears on the default domain. Both values must be defined, or default self-signed keys will be generated.

Default: Self signed keys are generated.

conf_node_supplementary_posix_groups

Name of supplementary UNIX group to add a gear to.

conf_node_watchman_service

Enable/Disable the OpenShift Node watchman service

Default: true

conf_node_watchman_gearretries

Number of restarts to attempt before waiting RETRY_PERIOD

Default: 3

conf_node_watchman_retrydelay

Number of seconds to wait before accepting another gear restart

Default: 300

conf_node_watchman_retryperiod

Number of seconds to wait before resetting retries

Default: 28800

conf_node_watchman_statechangedelay

Number of seconds a gear must remain inconsistent with its state before Watchman attempts to reset the state

Default: 900

conf_node_watchman_statecheckperiod

Wait at least this number of seconds since last check before checking gear state on the Node. Use this to reduce Watchman’s GearStatePlugin’s impact on the system.

Default: 0

conf_node_custom_motd

Define a custom MOTD to be displayed to users who connect to their gears directly. If undef, uses the default MOTD included with the node package.

Default: undef

development_mode

Set development mode and extra logging.

Default: false

register_host_with_nameserver

Setup DNS entries for this host in a locally installed bind DNS instance.

Default: false

dns_infrastructure_zone

The name of a zone to create which will contain OpenShift infrastructure. If this is unset then no infrastructure zone or other artifacts will be created.

Default: ""

dns_infrastructure_key

A dnssec symmetric key which will grant update access to the infrastructure zone resource records.

This is ignored unless dns_infrastructure_zone is set.

Default: ""

dns_infrastructure_key_algorithm

When using a BIND key, use this algorithm for the infrastructure BIND key.

This is ignored unless dns_infrastructure_zone is set.

Default: 'HMAC-MD5'

dns_infrastructure_names

An array of hashes containing hostname and IP Address pairs to populate the infrastructure zone.

This value is ignored unless dns_infrastructure_zone is set.

Hostnames can be simple names or fully qualified domain names (FQDNs).

Simple names will be placed in the dns_infrastructure_zone. FQDNs that match the zone will be placed in the dns_infrastructure_zone as well. Hostnames anchored with a dot (.) will be added verbatim.

Default: []

Example
$dns_infrastructure_names = [
  {hostname => "broker1", ipaddr => "10.0.0.1"},
  {hostname => "data1", ipaddr => "10.0.0.2"},
  {hostname => "message1", ipaddr => "10.0.0.3"},
  {hostname => "node1", ipaddr => "10.0.0.11"},
  {hostname => "node2", ipaddr => "10.0.0.12"},
  {hostname => "node3", ipaddr => "10.0.0.13"},
]

manage_firewall

Indicate whether or not this module will configure the firewall for you

Default: false

syslog_enabled

Direct logs to syslog rather than log files. See https://blog.openshift.com/central-log-management-openshift-enterprise/ for more details.

Default: undef

syslog_central_server_hostname

Hostname of the central log server to which rsyslog logs will be forwarded.

install_cartridges

List of cartridges to be installed on the node. Options:

  • 10gen-mms-agent not available in OpenShift Enterprise

  • cron

  • diy

  • haproxy

  • mongodb

  • nodejs

  • perl

  • php

  • phpmyadmin not available in OpenShift Enterprise

  • postgresql

  • python

  • ruby

  • jenkins

  • jenkins-client

  • mysql for CentOS / RHEL deployments

  • jbosseap requires OpenShift Enterprise JBoss EAP add-on

  • jbossas not available in OpenShift Enterprise

  • jbossews

Default: ['10gen-mms-agent','cron','diy','haproxy','mongodb', 'nodejs','perl','php','phpmyadmin','postgresql', 'python','ruby','jenkins','jenkins-client','mysql']

Default in OpenShift Enterprise: ['cron','diy','haproxy','mongodb','nodejs','perl','php', 'postgresql','python','ruby','jenkins','jenkins-client', 'jbossews','mysql']

List of cartridge recommended dependencies to be installed on the node. Options:

  • all not available in OpenShift Enterprise

  • diy not available in OpenShift Enterprise

  • jbossas not available in OpenShift Enterprise

  • jbosseap requires OpenShift Enterprise JBoss EAP add-ons

  • jbossews

  • nodejs

  • perl

  • php

  • python

  • ruby

Default: ['all']

Default in OpenShift Enterprise: ['jbossews','nodejs','perl','php','python','ruby']

install_cartridges_optional_deps

List of cartridge optional dependencies to be installed on the node. Options:

  • all not available in OpenShift Enterprise

  • diy not available in OpenShift Enterprise

  • jbossas not available in OpenShift Enterprise

  • jbosseap requires OpenShift Enterprise JBoss EAP add-ons

  • jbossews

  • nodejs

  • perl

  • php

  • python

  • ruby

Default: undef
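
A hedged sketch of a node that installs only a subset of cartridges, pulling in optional dependencies for PHP alone (the cartridge names follow the lists above):

class { 'openshift_origin' :
  roles                            => ['node'],
  install_cartridges               => ['cron', 'diy', 'haproxy', 'php', 'mysql'],
  install_cartridges_optional_deps => ['php'],
}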

update_network_conf_files

Indicate whether or not this module will configure resolv.conf and network for you.

Default: true

ose_version

Set this to the X.Y version (e.g. 2.2) of OpenShift Enterprise to ensure an OpenShift Enterprise supported configuration is used.

See README_OSE.asciidoc distributed with the openshift_origin puppet module for more details.

Default: undef

ose_unsupported

Set this to true in order to allow configurations unsupported by OpenShift Enterprise. Only appropriate for proof-of-concept environments.

This parameter is only used when ose_version is set.

Default: false

quickstarts_json

JSON content to be deployed into /etc/openshift/quickstarts.json

Default: undef, which on Origin will deploy the contents of templates/broker/quickstarts.json.erb

Default in OpenShift Enterprise: undef, and no quickstarts will be deployed

Manual Tasks

This script attempts to automate as many tasks as it reasonably can. Unfortunately, it is constrained to setting up only a single host at a time. In a multi-host setup, you will need to do the following after the script has completed.

  1. Set up DNS entries for hosts.

    If you installed BIND with the script, then any other components installed with the script on the same host received DNS entries. Other hosts must all be defined manually, including at least your node hosts. oo-register-dns may prove useful for this.

  2. Copy public rsync key to enable moving gears.

    The broker rsync public key needs to go on nodes, but there is no good way to script that generically. Nodes should not have password-less access to brokers to copy the .pub key, so this must be performed manually on each node host:

    # scp root@broker:/etc/openshift/rsync_id_rsa.pub /root/.ssh/
    (above step will ask for the root password of the broker machine)
    # cat /root/.ssh/rsync_id_rsa.pub >> /root/.ssh/authorized_keys
    # rm /root/.ssh/rsync_id_rsa.pub

    If you skip this, each gear move will require typing root passwords for each of the node hosts involved.

  3. Copy ssh host keys between the node hosts.

    All node hosts should identify as the same host, so that when gears are moved between hosts, ssh and git don’t give developers spurious warnings about the host keys changing. So, copy /etc/ssh/ssh_* from one node host to all the rest (or, if using the same image for all hosts, just keep the keys from the image).