Creating and Configuring a Data Sharing Cluster
Before creating and configuring a data sharing cluster, enable the cluster mode on all cluster nodes and make sure that the vz and parallels-server services are stopped:
# prlsrvctl set --cluster-mode on
# service vz stop
# service parallels-server stop
Creating the Cluster
Now you can start creating the data sharing cluster. The example below demonstrates how to set up a new cluster using the ccs_tool command-line tool. Using this tool, you can create two types of data sharing clusters:
- Active/passive clusters. An active/passive cluster includes both active and passive nodes. In this type of cluster, a passive node is used only if one of the active nodes fails.
- Active/active clusters. An active/active cluster consists of active nodes only, each running the Parallels Server Bare Metal software and hosting a number of virtual machines and Containers. In the event of a failover, all virtual machines and Containers running on the problem node are failed over to one of the healthy active nodes.
The process of creating both types of clusters is almost identical and is described below:
- Log in to any of your cluster nodes, and create a configuration file for the cluster, for example:
# ccs_tool create psbmCluster
This command creates a new configuration file for the psbmCluster cluster at the default path /etc/cluster/cluster.conf.
- Set fence devices for the cluster. The example below uses the apc network power switch as the fencing device:
# ccs_tool addfence apc fence_apc ipaddr=apc.server.com login="user1" passwd="XXXXXXXX"
For detailed information on fence device parameters, see the Cluster Administration guide at http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux.
Note: Manual fencing is supported for testing purposes only and is not recommended for use in production.
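The addfence command above records the device in cluster.conf. The resulting entry is typically similar to the following fragment (a sketch; the exact attribute set written by your ccs_tool version may differ):

```xml
<fencedevices>
  <!-- APC network power switch used to fence failed nodes -->
  <fencedevice name="apc" agent="fence_apc" ipaddr="apc.server.com" login="user1" passwd="XXXXXXXX"/>
</fencedevices>
```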
- Add all Parallels servers to the cluster. For example, to add the psbmNode1 server to the cluster, execute this command:
# ccs_tool addnode psbmNode1 -n 1 -v 1 -f apc port=1
where
- -n specifies the ID that will uniquely identify the node among other nodes in the cluster. Each node in the cluster must have its own unique node ID.
- -v denotes the number of votes for the psbmNode1 server. You can use 1 as the default vote number and change it later, if necessary.
- apc port=1 is the name of the fencing device (APC) and the unique port ID on the APC power switch for the psbmNode1 server. Each node in the cluster must have its own unique port ID.
Run the ccs_tool addnode command for each node you want to add to the cluster.
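Because the per-node commands differ only in the node name, node ID, and APC port, a batch of them can be generated in a loop. The sketch below previews such a batch for three hypothetical nodes (the psbmNode names and the one-APC-port-per-node mapping are assumptions; echo is used so you can inspect the commands before running them):

```shell
# Preview the addnode command for three hypothetical nodes, giving
# each a sequential node ID and a matching APC switch port.
for i in 1 2 3; do
    echo ccs_tool addnode "psbmNode$i" -n "$i" -v 1 -f apc "port=$i"
done
```

Remove echo once the printed commands look right for your environment.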
- Create cluster resources. For a data sharing cluster, you need to create two resources: a script and an IP address.
- Script. One script per cluster is required. The following command creates the script resource vzservice:
# ccs_tool addscript vzservice /etc/init.d/vz-cluster
When creating the script resource, just replace vzservice with a name of your own.
- IP address. An IP address is required for each pair of vz and parallels-server services. This IP address is used for a direct SSH connection to the server. Note that the IP address will be managed by the cluster and, therefore, must not already be in use or assigned to any node. For example:
# ccs_tool addip 10.10.10.111
- Create and configure failover domains. You need to create one failover domain per cluster service managed by the cluster and configure the list of cluster nodes that will be able to run cluster services from these domains. For example, you can run the following command to create the failover domain domain1 and add the nodes psbmNode1 and psbmNode2 to this domain:
# ccs_tool addfdomain domain1 psbmNode1 psbmNode2
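In cluster.conf, this command typically produces a fragment similar to the following (a sketch; attribute defaults such as ordering and restriction flags may differ depending on your ccs_tool version):

```xml
<failoverdomains>
  <!-- domain1 lists the nodes allowed to run a given clustered service -->
  <failoverdomain name="domain1">
    <failoverdomainnode name="psbmNode1"/>
    <failoverdomainnode name="psbmNode2"/>
  </failoverdomain>
</failoverdomains>
```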
- Create clustered services. The number of services must correspond to the number of active servers and shared partitions. When creating a clustered service, do the following:
- Enable the service autostart.
- Configure the service operation mode:
- For an active/passive cluster, configure the service to run exclusively. This prevents the cluster from trying to run more than one pair of vz and parallels-server services on the same physical server.
- For an active/active cluster, make sure that the service is configured not to run exclusively so that more than one vz service can run on the same node.
- Set the service recovery policy to relocate or restart. With the restart policy, if the vz and parallels-server services stop for some reason, the cluster first tries to restart these services on the same server before relocating them to another one.
- Specify the IP address resource for the service.
- Specify the proper failover domain.
- Specify the script resource for the service.
- Specify the name to assign to the service.
For example:
# ccs_tool addservice -a 1 -x 1 -r restart -i 10.10.10.200 -d domain1 -s vzservice pservice1
This command creates the clustered service pservice1; enables its autostart (-a 1); configures the service to run exclusively (-x 1); sets its recovery policy to restart (-r restart); assigns the IP address 10.10.10.200 to the service (-i 10.10.10.200); and associates the service with the failover domain domain1 (-d domain1) and the script vzservice (-s vzservice).
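For comparison, an active/active service would be created with -x 0 instead of -x 1, so that a node can take over additional vz services after a failover. In cluster.conf, such a service entry typically looks similar to the following fragment (a sketch; pservice2, domain2, and 10.10.10.201 are hypothetical names, and attribute names may vary between Red Hat Cluster Suite releases):

```xml
<!-- exclusive="0": the node may host more than one vz service -->
<service name="pservice2" autostart="1" exclusive="0" recovery="relocate" domain="domain2">
  <ip address="10.10.10.201"/>
  <script ref="vzservice"/>
</service>
```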
Configuring the Cluster
Once you create the cluster configuration, distribute the configuration file (/etc/cluster/cluster.conf) to all cluster nodes, and start the cman service on each cluster node, one after another:
# service cman start
Once the cman service successfully starts on all cluster nodes, complete the following tasks on each node:
- Start the gfs2 service:
# service gfs2 start
- Move all the data from the temporary /vz1 directory to /vz, and then remove /vz1:
# mv /vz1/* /vz/
# rm -rf /vz1
- Start the rgmanager service:
# service rgmanager start
- Configure the clustering services to start in the default runlevel. For example, if your system default runlevel is set to 3, enable the services by executing the following commands on each of the cluster nodes:
# chkconfig --level 3 cman on
# chkconfig --level 3 rgmanager on
Once you perform the operations above, use the clustat utility (you can run it on any cluster node) to make sure that all the services have started successfully. If they have not, investigate the cluster logs, which are stored in /var/log/messages by default. Keep in mind that the information you are looking for may be located on different servers in the cluster.
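A simple way to narrow the logs down to cluster-related entries is to filter /var/log/messages by the cluster daemon names. The sketch below runs such a filter against a small inline sample instead of the real log (the daemon name list is an assumption and may vary between releases):

```shell
# Create a small sample log (stand-in for /var/log/messages).
cat <<'EOF' > /tmp/sample-messages
May  1 10:00:01 psbmNode1 rgmanager[1234]: Service pservice1 started
May  1 10:00:02 psbmNode1 sshd[999]: Accepted password for root
May  1 10:00:03 psbmNode1 fenced[567]: fencing node "psbmNode2"
EOF
# Keep only entries from the cluster daemons.
grep -E 'cman|rgmanager|fenced|dlm|gfs' /tmp/sample-messages
```

On a real node, point the grep at /var/log/messages on each server in turn.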
Configuring Parallels Virtual Automation
If you plan to use the Parallels Virtual Automation application for managing your cluster nodes:
- On each cluster node, start the PVA Agent services, and configure them to start automatically when you restart the nodes:
# service pvaagentd start
# service pvapp start
# chkconfig pvaagentd on
# chkconfig pvapp on
- Log in to the Parallels Virtual Automation Management Node.
- Register all cluster nodes one after another. To register a node, use its IP address.