
Chapter 4. Guest Node Walk-through

Table of Contents

4.1. Configure the Physical Host
4.1.1. Configure Firewall on Host
4.1.2. Install Cluster Software
4.1.3. Configure Corosync
4.1.4. Configure Pacemaker for Remote Node Communication
4.1.5. Verify Cluster Software
4.1.6. Disable STONITH and Quorum
4.1.7. Install Virtualization Software
4.2. Configure the KVM guest
4.2.1. Create Guest
4.2.2. Configure Firewall on Guest
4.2.3. Verify Connectivity
4.2.4. Configure pacemaker_remote
4.2.5. Verify Host Connection to Guest
4.3. Integrate Guest into Cluster
4.3.1. Start the Cluster
4.3.2. Integrate as Guest Node
4.3.3. Starting Resources on KVM Guest
4.3.4. Testing Recovery and Fencing
4.3.5. Accessing Cluster Tools from Guest Node
What this tutorial is: An in-depth walk-through of how to get Pacemaker to manage a KVM guest instance and integrate that guest into the cluster as a guest node.
What this tutorial is not: A realistic deployment scenario. The steps shown here are meant to get users familiar with the concept of guest nodes as quickly as possible.

4.1. Configure the Physical Host

Note

For this example, we will use a single physical host named example-host. A production cluster would likely have multiple physical hosts, in which case you would run the commands here on each one, unless noted otherwise.

4.1.1. Configure Firewall on Host

On the physical host, allow cluster-related services through the local firewall:
# firewall-cmd --permanent --add-service=high-availability
success
# firewall-cmd --reload
success

Note

If you are using iptables directly, or some other firewall solution besides firewalld, simply open the following ports, which may be used by various clustering components: TCP ports 2224, 3121, and 21064, and UDP port 5405 (see the iptables sketch following this note).
If you run into any problems during testing, you might want to disable the firewall and SELinux entirely until you have everything working. This may create significant security issues and should not be performed on machines that will be exposed to the outside world, but may be appropriate during development and testing on a protected host.
To disable security measures:
[root@pcmk-1 ~]# setenforce 0
[root@pcmk-1 ~]# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
[root@pcmk-1 ~]# systemctl mask firewalld.service
[root@pcmk-1 ~]# systemctl stop firewalld.service
[root@pcmk-1 ~]# iptables --flush
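Conversely, if you want to keep packet filtering enabled and only open the ports listed in the note above, a minimal iptables sketch would look like the following. This assumes the default INPUT policy rejects new connections; how you make these rules persistent depends on your distribution:
# iptables -I INPUT -p tcp -m state --state NEW --dport 2224 -j ACCEPT
# iptables -I INPUT -p tcp -m state --state NEW --dport 3121 -j ACCEPT
# iptables -I INPUT -p tcp -m state --state NEW --dport 21064 -j ACCEPT
# iptables -I INPUT -p udp --dport 5405 -j ACCEPT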

4.1.2. Install Cluster Software

# yum install -y pacemaker corosync pcs resource-agents
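To confirm that the packages installed successfully, you can query them with rpm (a quick check, not part of the original walk-through):
# rpm -q pacemaker corosync pcs resource-agents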

4.1.3. Configure Corosync

Corosync handles Pacemaker’s cluster membership and messaging. The Corosync configuration file is located at /etc/corosync/corosync.conf. That file must be initialized with information about the cluster nodes before Pacemaker can start.
To initialize the corosync config file, execute the following pcs command, replacing the cluster name and hostname as desired:
# pcs cluster setup --force --local --name mycluster example-host

Note

If you have multiple physical hosts, you would execute the setup command on only one host, but list all of them at the end of the command.
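For reference, the generated /etc/corosync/corosync.conf should look roughly like the following. This is only an illustration; the exact directives and values depend on your pcs and corosync versions:
# Illustrative example only - actual contents depend on pcs/corosync versions
totem {
    version: 2
    secauth: off
    cluster_name: mycluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: example-host
        nodeid: 1
    }
}

quorum {
    provider: corosync_votequorum
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}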

4.1.4. Configure Pacemaker for Remote Node Communication

Create a place to hold an authentication key for use with pacemaker_remote:
# mkdir -p --mode=0750 /etc/pacemaker
# chgrp haclient /etc/pacemaker
Generate a key:
# dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1

Note

If you have multiple physical hosts, you would generate the key on only one host, and copy it to the same location on all hosts.
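For example, assuming a second physical host named example-host-2 (a hypothetical name used only for illustration), you could create the directory there and copy the key like this:
# ssh root@example-host-2 'mkdir -p --mode=0750 /etc/pacemaker; chgrp haclient /etc/pacemaker'
# scp -p /etc/pacemaker/authkey root@example-host-2:/etc/pacemaker/authkey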

4.1.5. Verify Cluster Software

Start the cluster:
# pcs cluster start
Verify Corosync membership:
# pcs status corosync

Membership information
----------------------
    Nodeid      Votes Name
         1          1 example-host (local)
Verify Pacemaker status. At first, the output will look like this:
# pcs status
Cluster name: mycluster
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: NONE
Last updated: Fri Jan 12 15:18:32 2018
Last change: Fri Jan 12 12:42:21 2018 by root via cibadmin on example-host

1 node configured
0 resources configured

Node example-host: UNCLEAN (offline)

No active resources

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
After a short amount of time, you should see your host as a single node in the cluster:
# pcs status
Cluster name: mycluster
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: example-host (version 1.1.16-12.el7_4.5-94ff4df) - partition WITHOUT quorum
Last updated: Fri Jan 12 15:20:05 2018
Last change: Fri Jan 12 12:42:21 2018 by root via cibadmin on example-host

1 node configured
0 resources configured

Online: [ example-host ]

No active resources

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
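If you prefer a continuously updating view while you wait, crm_mon (installed with Pacemaker) shows the same status information and refreshes automatically; press Ctrl-C to exit:
# crm_mon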

4.1.6. Disable STONITH and Quorum

Now, configure the cluster to work without quorum or STONITH. This is required only because this tutorial uses a single cluster node.
# pcs property set stonith-enabled=false
# pcs property set no-quorum-policy=ignore

Warning

The use of stonith-enabled=false is completely inappropriate for a production cluster. It tells the cluster to simply pretend that failed nodes are safely powered off. Some vendors will refuse to support clusters that have STONITH disabled. We disable STONITH here only to focus the discussion on pacemaker_remote, and to be able to use a single physical host in the example.
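If you want to confirm the values just set, you can list the configured cluster properties (a quick verification step, not part of the original walk-through); the output should include stonith-enabled: false and no-quorum-policy: ignore:
# pcs property list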
Now, the status output should look similar to this:
# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: example-host (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
Last updated: Fri Jan 12 15:22:49 2018
Last change: Fri Jan 12 15:22:46 2018 by root via cibadmin on example-host

1 node configured
0 resources configured

Online: [ example-host ]

No active resources

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
Go ahead and stop the cluster for now after verifying everything is in order.
# pcs cluster stop --force

4.1.7. Install Virtualization Software

# yum install -y kvm libvirt qemu-system qemu-kvm bridge-utils virt-manager
# systemctl enable libvirtd.service
Reboot the host.
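After the reboot, a quick sanity check (not in the original walk-through) is to confirm that the KVM kernel modules are loaded and that libvirtd is running:
# lsmod | grep kvm
# systemctl status libvirtd.service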

Note

While KVM is used in this example, any virtualization platform with a Pacemaker resource agent can be used to create a guest node. The resource agent needs only to support the usual commands (start, stop, etc.); Pacemaker implements the remote-node meta-attribute, independent of the agent.