Set up IBM® Cloud Private-CE (Community Edition) master, worker, proxy, and optional management nodes in your cluster.
Before you install IBM Cloud Private-CE, prepare your cluster. See Configuring your cluster.
Follow these steps to install IBM Cloud Private-CE master, worker, proxy, and optional management nodes. Run these steps from your boot node. For more information about node types, see the IBM Cloud Private-CE Architecture.
You must log in to the boot node as a user with root permission to install an IBM Cloud Private-CE cluster.
Set up the installation environment
- Log in to the boot node as a user with root permissions. The boot node is usually your master node. For more information about node types, see Architecture. During installation, you specify the IP addresses for each node type.
- Download the IBM Cloud Private-CE installer image.
  For Linux® 64-bit, run this command:
  sudo docker pull ibmcom/icp-inception:2.1.0.1
  For Linux® on Power® 64-bit LE, run this command:
  sudo docker pull ibmcom/icp-inception-ppc64le:2.1.0.1
- Create an installation directory to store the IBM Cloud Private configuration files, and change to that directory. For example, to store the configuration files in /opt/ibm-cloud-private-ce-2.1.0.1, run the following commands:
  mkdir /opt/ibm-cloud-private-ce-2.1.0.1
  cd /opt/ibm-cloud-private-ce-2.1.0.1
- Extract the configuration files.
  For Linux® 64-bit, run this command:
  sudo docker run -e LICENSE=accept \
    -v "$(pwd)":/data ibmcom/icp-inception:2.1.0.1 cp -r cluster /data
  For Linux® on Power® 64-bit LE, run this command:
  sudo docker run -e LICENSE=accept \
    -v "$(pwd)":/data ibmcom/icp-inception-ppc64le:2.1.0.1 cp -r cluster /data
  A cluster directory is created inside your installation directory. For example, if your installation directory is /opt, the /opt/cluster folder is created. The cluster directory contains the following files:
  - config.yaml: The configuration settings that are used to install IBM Cloud Private to your cluster.
  - hosts: The definition of the nodes in your cluster.
  - misc/storage_class: A folder that contains the dynamic storage class definitions for your cluster.
  - ssh_key: A placeholder file for the SSH private key that is used to communicate with other nodes in the cluster.
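For orientation, the extracted layout can be sketched as a tree (assuming /opt as the installation directory; exact contents may vary by release):

```
/opt/cluster
├── config.yaml
├── hosts
├── misc
│   └── storage_class
└── ssh_key
```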
- Create a secure connection from the boot node to all other nodes in your cluster. Complete one of the following processes:
- Set up SSH in your cluster. See Sharing SSH keys among cluster nodes.
- Set up password authentication in your cluster. See Configuring password authentication for cluster nodes.
- Add the IP address of each node in the cluster to the /<installation_directory>/cluster/hosts file. See Setting the node roles in the hosts file.
  Note: Worker nodes can support mixed architectures. You can add worker nodes that run on Linux® 64-bit, Linux® on Power® 64-bit LE, and IBM® Z platforms into a single cluster.
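As an illustration only, a hosts file for a small cluster might look like the following. The IP addresses are placeholders; use the group names that are present in the generated hosts file:

```ini
# Placeholder addresses; replace with the IP addresses of your own nodes.
[master]
10.0.0.11

[worker]
10.0.0.12
10.0.0.13

[proxy]
10.0.0.11

# The management group is optional.
[management]
10.0.0.14
```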
- If you use SSH keys to secure your cluster, replace the ssh_key file in the /<installation_directory>/cluster folder with the private key file that is used to communicate with the other cluster nodes. See Sharing SSH keys among cluster nodes. Run this command:
  sudo cp ~/.ssh/id_rsa ./cluster/ssh_key
  In this example, ~/.ssh/id_rsa is the location and name of the private key file.
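The staging step can be sketched end to end as follows. This is a minimal, self-contained example: a temporary directory stands in for your real installation directory, a dummy file stands in for ~/.ssh/id_rsa, and the chmod 400 step reflects the usual expectation that private keys are not group- or world-readable:

```shell
# Sketch only: stage a private key as cluster/ssh_key and restrict its
# permissions. $demo_dir is a stand-in for your installation directory.
demo_dir=$(mktemp -d)
mkdir -p "$demo_dir/cluster"
printf 'FAKE-KEY\n' > "$demo_dir/id_rsa"       # stand-in for ~/.ssh/id_rsa
cp "$demo_dir/id_rsa" "$demo_dir/cluster/ssh_key"
chmod 400 "$demo_dir/cluster/ssh_key"          # private keys should be owner-readable only
stat -c '%a' "$demo_dir/cluster/ssh_key"       # prints 400 on Linux
```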
Customize your cluster
You can complete most of your cluster customization in the /<installation_directory>/cluster/config.yaml file. To review the full list of parameters that are available to customize, see Customizing the cluster with the config.yaml file.
You can also set node-specific parameter values in the /<installation_directory>/cluster/hosts file. However, parameter values that are set in the config.yaml file have the highest priority during an installation. To set a parameter value in the hosts file, you must remove the parameter from the config.yaml file. For more information about setting node-specific parameter values in the hosts file, see Setting the node roles in the hosts file.
In an environment that has multiple network interfaces (NICs), such as OpenStack or AWS, ensure that you add the following settings to the config.yaml file:
  cluster_access_ip: <external IP address>
  calico_ip_autodetection_method: can-reach=<Master node IP address>
Setting the calico_ip_autodetection_method parameter is required only if you are setting up a Calico network.
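Concretely, with placeholder addresses filled in, the added config.yaml block might read as follows (both addresses are illustrative, not values from this document):

```yaml
# Placeholder values; substitute your own addresses.
cluster_access_ip: 203.0.113.10                      # externally reachable IP for the cluster
calico_ip_autodetection_method: can-reach=10.0.0.11  # only needed for Calico networks
```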
For more information about network settings, see Table 4: Network settings.
- (Optional) Configure the monitoring service. See Configuring the monitoring service.
- (Optional) Specify a certificate authority (CA) for your cluster. See Specifying your own certificate authority (CA) for IBM Cloud Private services.
- (Optional) Set up a federation. See Table 8: Federation settings. This feature is available as a technology preview only.
- (Optional) Provision GlusterFS storage on worker nodes. See GlusterFS storage.
- (Optional) Configure vSphere Cloud Provider. See Configuring a vSphere Cloud Provider.
- (Optional) Create one or more storage classes for the storage provisioners in your environment. See Dynamic storage provisioning.
- (Optional) Encrypt cluster data network traffic with IPsec. See Encrypting cluster data network traffic with IPsec.
- (Optional) Encrypt the etcd key-value store. See Encrypting volumes by using eCryptfs. Note: In IBM Cloud Private Version 2.1.0.1, volume encryption is not supported on Linux® on Power® 64-bit LE.
- (Optional) Integrate VMware NSX-T 2.0 with IBM Cloud Private-CE cluster nodes. See Integrating VMware NSX-T 2.0 with IBM Cloud Private.
Deploy the environment
- Change to the cluster folder in your installation directory:
  cd ./cluster
Deploy your environment.
- Deploy your environment.
  Note: By default, the deployment command deploys 15 nodes at a time. If your cluster has more than 15 nodes, the deployment might take longer to finish. To speed up the deployment, you can specify a higher number of nodes to be deployed at a time by adding the argument -f <number of nodes to deploy> to the command.
  For Linux® 64-bit, run this command:
  sudo docker run -e LICENSE=accept --net=host \
    -t -v "$(pwd)":/installer/cluster \
    ibmcom/icp-inception:2.1.0.1 install
  For Linux® on Power® 64-bit LE, run this command:
  sudo docker run -e LICENSE=accept --net=host \
    -t -v "$(pwd)":/installer/cluster \
    ibmcom/icp-inception-ppc64le:2.1.0.1 install
Verify the status of your installation.
If the installation succeeded, the access information for your cluster is displayed:
UI URL is https://master_ip:8443 , default username/password is admin/admin
In this message, master_ip is the IP address of the master node for your IBM Cloud Private-CE cluster.
Note: If you created your cluster within a private network, use the public IP address for the master node to access the cluster.
- If you encounter errors during installation, see Troubleshooting.
Access your cluster
From a web browser, browse to the URL for your cluster. For a list of supported browsers, see Supported browsers.
- For more information about accessing your cluster by using the IBM Cloud Private-CE management console from a web browser, see Accessing your IBM Cloud Private cluster by using the management console.
- For more information about accessing your cluster by using the Kubernetes command line (kubectl), see Accessing your IBM Cloud Private cluster by using the kubectl CLI.
Note: You might see a 502 Bad Gateway message when you open a page in the management console shortly after installation. If you do, the NGINX service has not yet started all components. The pages load after all components start.
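Rather than refreshing the browser, you can watch for the console to come up from the command line. The following sketch builds the console URL for a master node; the polling loop is shown as a comment because it needs a live cluster, and 203.0.113.10 is a placeholder address, not a value from this document:

```shell
# Sketch only: build the management-console URL for a given master node IP.
console_url() { printf 'https://%s:8443' "$1"; }

# Against a live cluster you could poll until the console stops answering
# 502 while NGINX starts the remaining components, for example:
#   until [ "$(curl -k -s -o /dev/null -w '%{http_code}' \
#       "$(console_url 203.0.113.10)")" != "502" ]; do
#     sleep 10
#   done
console_url 203.0.113.10   # prints https://203.0.113.10:8443
```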
Post installation tasks
- Ensure that all the IBM Cloud Private-CE default ports are open. For more information about the default IBM Cloud Private-CE ports, see Default ports.
- Back up the boot node. Copy your /<installation_directory>/cluster directory to a more secure location. If you use SSH keys to secure your cluster, ensure that the SSH keys in the backup directory remain in sync.
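One simple way to take that backup is to archive the whole cluster directory, which keeps config.yaml, hosts, and ssh_key together. This is a minimal sketch: a temporary directory with placeholder files stands in for your real /<installation_directory>, and the archive name is an arbitrary choice:

```shell
# Sketch only: archive the cluster directory so it can be copied elsewhere
# and restored later. $install_dir stands in for /<installation_directory>.
install_dir=$(mktemp -d)
mkdir -p "$install_dir/cluster/misc/storage_class"
printf '# placeholder\n' > "$install_dir/cluster/config.yaml"
printf '# placeholder\n' > "$install_dir/cluster/hosts"

# Create a compressed archive of the whole cluster directory.
tar -czf "$install_dir/cluster-backup.tar.gz" -C "$install_dir" cluster

# List the archive contents to confirm what was captured.
tar -tzf "$install_dir/cluster-backup.tar.gz"
```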