Configuring Shared Storage on GFS
Setting up shared data storage located on a partition formatted with GFS v2 is supported for virtual machines only and includes the following steps:
- Configuring the data storage for the first node in the cluster.
- Configuring the data storage for all other nodes in the cluster.
Configuring the Data Storage for the First Node in the Cluster
To configure the shared data storage for the first node in the cluster, do the following:
- Log in to any of your cluster nodes.
- Use standard Linux tools, such as Logical Volume Manager (LVM), to set up a logical volume (for example, /dev/vg01/lv01) on your data storage. This logical volume will host the /vz partition. Note that one logical volume is required for each Red Hat GFS file system.
Note: If you are going to use LVM for creating logical volumes, make sure that it is configured with clustered locking support. Otherwise, the LVM metadata may become corrupted. For detailed information on LVM and its configuration settings, see the LVM documentation and the lvm.conf man page.
For example:
# pvcreate /dev/sdb1
# vgcreate vg01 /dev/sdb1
# lvcreate -l 100%VG -n lv01 vg01
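If clustered locking is not yet enabled, you can typically turn it on with the lvmconf utility and start the clustered LVM daemon (a sketch that assumes the lvm2-cluster package, which provides lvmconf and clvmd, is installed):
# lvmconf --enable-cluster
# service clvmd start
# chkconfig clvmd on
The lvmconf --enable-cluster command sets locking_type to 3 in /etc/lvm/lvm.conf, which makes LVM use the cluster-wide locking provided by clvmd.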
- Create a GFS file system on the logical volume using the gfs2_mkfs utility. For example, you can run the following command to do this:
# gfs2_mkfs -p lock_dlm -t psbmCluster:gfs2 -j 4 /dev/vg01/lv01
As a result of this command, a new GFS file system named gfs2 will be created for the psbmCluster cluster. The file system will use the lock_dlm locking protocol, contain 4 journals, and reside on the /dev/vg01/lv01 volume.
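To check that the file system was created with the expected lock table and locking protocol, you can typically inspect its superblock with the gfs2_tool utility (assuming the gfs2-utils package is installed):
# gfs2_tool sb /dev/vg01/lv01 all
The output should list lock_dlm as the locking protocol and psbmCluster:gfs2 as the lock table name.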
- Make sure that the created logical volume can be accessed by all servers in the cluster. This ensures that the clustering software can mount the /vz partition, which you will create on the logical volume in the next step, on any of your cluster nodes.
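One way to verify this is to run an LVM scan on each node and check that the volume is visible (the exact output format depends on your LVM version):
# lvscan
The /dev/vg01/lv01 volume should appear in the listing on every node in the cluster.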
- Tell the node to automatically mount the /vz partition on boot. To do this, add a /vz entry to the /etc/fstab file on the node. Assuming that your GFS file system resides on the /dev/vg01/lv01 logical volume, you can add the following entry to the fstab file:
/dev/vg01/lv01 /vz gfs2 defaults,noatime 0 0
If you use LVM on a GFS file system over a partition provided via the iSCSI protocol, you need to add the extra option _netdev to the entry in /etc/fstab so that the LVM tools search for the volume only after network file systems are initialized:
/dev/vg01/lv01 /vz gfs2 defaults,noatime,_netdev 0 0
Also make sure that the netfs service is enabled by default:
# chkconfig netfs on
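To confirm that the netfs service is now enabled in the default runlevels, you can list its chkconfig settings:
# chkconfig --list netfs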
- Configure the gfs2 service on the node to start in the default runlevel. For example, if your system default runlevel is set to 3, you can enable the gfs2 service by executing the following command:
# chkconfig --level 3 gfs2 on
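If you are not sure which runlevel your system boots into by default, you can typically check the initdefault entry in /etc/inittab:
# grep :initdefault: /etc/inittab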
- Enable the cluster mode, and stop the vz, parallels-server, and PVA Agent services:
# prlsrvctl set --cluster-mode on
# service vz stop
# service parallels-server stop
# service pvaagentd stop
# service pvapp stop
- Move /vz to a temporary directory /vz1, and create a new /vz directory:
# mv /vz /vz1; mkdir /vz
Later on, you will mount the shared data storage located on the GFS volume to the newly created /vz directory and move all data from the /vz1 directory to it, as sketched below.
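As a sketch of that later operation (the exact point at which you perform it depends on the rest of your cluster configuration), the transfer could look like this:
# mount /vz
# mv /vz1/* /vz/
# rmdir /vz1
Here mount /vz picks up the entry you added to /etc/fstab earlier, and the mv command moves the existing data onto the shared GFS volume. If /vz1 contains hidden files, move them as well before removing the directory.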
Configuring the Data Storage for Other Nodes in the Cluster
To configure the shared data storage for the second and all remaining nodes in the cluster, do the following:
- Tell each node in the cluster to automatically mount the /vz partition on boot. To do this, add a /vz entry to the /etc/fstab file on each node in the cluster. Assuming that your GFS file system resides on the /dev/vg01/lv01 logical volume, you can add the following entry to the fstab file:
/dev/vg01/lv01 /vz gfs2 defaults,noatime 0 0
- Configure the gfs2 service on each node in the cluster to start in the default runlevel. For example, if your system default runlevel is set to 3, you can enable the gfs2 service by executing the following command on each of the cluster nodes:
# chkconfig --level 3 gfs2 on