Configuring a PowerHA Cluster (7.5)

Background

PowerHA integrates and extends IBM i’s clustering infrastructure. An IBM i cluster is a collection of systems or logical partitions called cluster nodes. The cluster provides the communication infrastructure between these systems and delivers the following:

  • Coordinated execution of cluster events, such as a cluster node failing.

  • Simplified management with a single point of control. Most PowerHA commands and operations can be performed from any node in the cluster, because PowerHA and the cluster coordinate the operation across the required nodes.

  • Monitoring for network failures via heartbeat monitoring between nodes. With heartbeat monitoring, each active node sends a signal to every other node in the cluster to convey that it is still active. When an active node fails to respond, a network partition occurs and the status of the affected nodes changes to Partition. Once communication is restored, this condition is corrected automatically.

  • Detection of operating system events such as PWRDWNSYS or ENDSYS and of clustering failures; the failing node sends a distress message to the other nodes, which can trigger automatic failover of data and applications. Additional failures, such as hardware and power failures, can be detected by registering for HMC failure events with advanced node failure detection.

A cluster can contain up to 128 nodes, although in most cases clusters consist of between two and six nodes.

Procedure

Start the InetD Server

  1. Clustering requires the *INETD server to be active for cluster startup. Run the following command on all nodes that will be in the cluster: STRTCPSVR SERVER(*INETD).

  2. Since the InetD server must be active to start clustering, it is recommended to either change the server to start automatically by using CHGTCPSVR SVRSPCVAL(*INETD) AUTOSTART(*YES) or add STRTCPSVR SERVER(*INETD) to the system startup program, as sketched below.
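For example, a fragment like the following could be added to the startup program named in the QSTRUPPGM system value. This is a minimal sketch; the MONMSG error handling simply ignores the message issued if the server is already active:

    PGM
    /* Start the InetD server; tolerate it already being active */
    STRTCPSVR  SERVER(*INETD)
    MONMSG     MSGID(CPF0000)
    ENDPGM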

Allowing an IBM i to be Added to a Cluster

To be secure by default, IBM i systems are shipped with the ability to add the system to a cluster disabled. This setting must be changed on all nodes so that a system can be part of a cluster. There are two options for allowing nodes to be added to a cluster:

  • Recommended: Allow any system to add this node to a cluster, but only if the request is authenticated.

This option uses X.509 digital certificates to verify that cluster nodes are trusted before allowing them to be added to the cluster. It requires that the following software products be installed on the systems:

  • IBM i Option 34 (Digital Certificate Manager)

  • IBM i Option 35 (CCA Cryptographic Services Provider)

  1. Run the following command: CHGNETA ALWADDCLU(*RQSAUT).

  2. In Digital Certificate Manager, assign a certificate to the QIBM_QCST_CLUSTER_SECURITY application.

  3. If using a self-signed certificate, ensure that the certificate authority of each node is trusted by every node in the cluster.

  4. Repeat steps 1-3 for all nodes in the cluster.

  • Easier configuration: Allow any system to add this node to a cluster, with no authentication required.

To allow a system to be added as a node in a cluster with no authentication, run the following command on all nodes in the cluster: CHGNETA ALWADDCLU(*ANY).
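Whichever option is chosen, the current setting can be verified on each node by running the Display Network Attributes (DSPNETA) command and locating the Allow add to cluster (ALWADDCLU) value in the displayed attributes.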

Creating a Cluster

A cluster can be created using one of the following methods:

  • Type the CRTCLU command and press F4 to go to the Create Cluster (CRTCLU) command screen.

  • From the command line, enter the CRTCLU command and your parameters.

For this example, a cluster is created using the following command:

CRTCLU CLUSTER(MYCLU) NODE((PROD ('192.0.2.110' '192.0.2.111')) (DR ('192.0.2.210' '192.0.2.211')))

This command creates a cluster named MYCLU with two nodes, one named PROD and the second named DR. The node names do not need to match the system names; however, matching names can make administration easier. Clustering uses the IP addresses for communication, and it is recommended to configure multiple IP addresses per node for redundancy.

Multiple IP Address Rules

All cluster IP addresses on a particular node must be able to communicate with all cluster IP addresses on every other node in the cluster.
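Connectivity between specific address pairs can be spot-checked with the PING command. As an illustrative sketch reusing the example addresses from the CRTCLU command above, the following verifies that the first PROD cluster address can reach the first DR cluster address:

    PING RMTSYS(*INTNETADR) INTNETADR('192.0.2.210') LCLINTNETA('192.0.2.110')

Repeat for each local and remote address pair; every combination must succeed.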

By default, the CRTCLU command does the following:

  • Create the cluster.

  • Attempt to start clustering on all of the nodes in the cluster.

  • Add all nodes into a device domain so that they can share IASP resources. Typically all nodes in a cluster are part of the same device domain.

Manually Adding Nodes to a Device Domain

If a failure occurs when clustering is initially started, a node may need to be added to the device domain manually. This can be done using the Add Device Domain Node Entry (ADDDEVDMNE) command. For example: ADDDEVDMNE DEVDMN(DEVDMN) NODE(DR).

Starting Clustering

While clustering is started automatically in this example, clustering must be started again every time a system IPL is performed. Typically, this means that clustering is started as part of the system startup program by a user with sufficient authority (see the sketch after this list). Use one of the following methods to start clustering:

  • To start clustering on just the local system, use: STRCLUNOD NODE(*).

  • To start clustering on the local system first, and then attempt to start clustering on any other node where it is not active, use: STRCLUNOD NODE(*ALL).

  • From the Work with Cluster Menu:

    • Type WRKCLU to get to the Work with Cluster Menu.

    • Type option 6 and press Enter to Work with Cluster Nodes.

    • Type option 8 next to the node to start. Clustering must be started on the local node before remote nodes can be started.

  • From the PowerHA Web Interface:

    • Navigate to Cluster Nodes.

    • Right-click on the cluster node to start, and select Start.
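As referenced above, a common approach is to extend the system startup program so that clustering starts after InetD. A minimal sketch, building on the InetD fragment shown earlier (the MONMSG error handling is illustrative):

    PGM
    /* InetD must be active before clustering can start */
    STRTCPSVR  SERVER(*INETD)
    MONMSG     MSGID(CPF0000)
    /* Start clustering on the local node */
    STRCLUNOD  NODE(*)
    MONMSG     MSGID(CPF0000)
    ENDPGM

Keep in mind that the startup job must run under a user profile with sufficient authority to start clustering.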

Configuring Advanced Node Failure Detection

Many environments do not use automatic failover. If this environment will use automatic failover, it is possible to configure advanced node failure detection using cluster monitors, which detect additional failures such as hardware and power failures by registering for HMC failure events. See Configuring Advanced Node Failure Detection for additional information.

Results

A cluster is now created and started. To verify the cluster information, use one of the following methods:

  • The Work with Cluster (WRKCLU) menu, followed by option 6 to work with cluster nodes.

  • The Display Cluster Information (DSPCLUINF) command. For example: DSPCLUINF.

  • The PowerHA web interface.

  • The SQL Services views: QHASM.CLUSTER_INFO and QHASM.CLUSTER_NODES (see the example query after this list).
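For example, the following query lists the cluster nodes and can be run from STRSQL or ACS Run SQL Scripts. This is a minimal sketch; the available columns vary by PowerHA release, so SELECT * is used here:

    SELECT * FROM QHASM.CLUSTER_NODES;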
