Configuring Shared Storages on GFS

Setting up a shared data storage located on a partition formatted with GFS v2 is supported for virtual machines only and includes the following steps:

  1. Configuring the data storage for the first node in the cluster.
  2. Configuring the data storage for all the other nodes in the cluster.

Configuring the Data Storage for the First Node in the Cluster

To configure the shared data storage for the first node in the cluster, do the following:

  1. Log in to any of your cluster nodes.
  2. Use standard Linux tools, such as Logical Volume Manager (LVM), to set up a logical volume (for example, /dev/vg01/lv01) on your data storage. This logical volume will host the /vz partition. Note that one logical volume is required for each Red Hat GFS file system.

    Note: If you are going to use Logical Volume Manager (LVM) to create logical volumes, make sure that it is configured with clustered locking support. Otherwise, the LVM metadata may become corrupted. For detailed information on LVM and its configuration settings, refer to the LVM documentation and the lvm.conf man page. An example of enabling clustered locking follows the commands below.

    For example:

    # pvcreate /dev/sdb1

    # vgcreate vg01 /dev/sdb1

    # lvcreate -l 100%VG -n lv01 vg01
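
    If the lvm2-cluster package is installed, clustered locking can typically be enabled with the lvmconf helper; the commands below are only a sketch, so verify the resulting settings against your LVM documentation:

    # lvmconf --enable-cluster

    # grep locking_type /etc/lvm/lvm.conf

    After the first command, the locking_type parameter in /etc/lvm/lvm.conf is expected to be set to 3 (clustered locking).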

  3. Create a GFS file system on the logical volume using the gfs2_mkfs utility. For example, you can run the following command to do this:

    # gfs2_mkfs -p lock_dlm -t psbmCluster:gfs2 -j 4 /dev/vg01/lv01

    In this example:

    • -p lock_dlm denotes the name of the locking protocol that will be used by the GFS file system. The recognized locking protocols are lock_dlm and lock_nolock.
    • -t psbmCluster:gfs2 denotes the name of the cluster (psbmCluster) for which the GFS file system is created and the name that will be assigned to the GFS file system (gfs2).

      Note: Keep in mind that you will need to specify this name when creating the cluster configuration.

    • -j 4 is the number of journals that will be created by the gfs2_mkfs utility. When deciding on the number of journals, keep in mind that one journal is required for each cluster node that is to mount the GFS file system. You can also create additional journals at file system creation time to reserve them for future use.
    • /dev/vg01/lv01 denotes the logical volume where the GFS file system is to be located.

    The command above creates a new GFS file system named gfs2 for the psbmCluster cluster. The file system uses the lock_dlm protocol, contains 4 journals, and resides on the /dev/vg01/lv01 logical volume.
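
    To double-check the superblock settings of the new file system, you can query the locking protocol and lock table name, for example with the gfs2_tool utility (assuming it is available on your nodes as part of the GFS2 tools):

    # gfs2_tool sb /dev/vg01/lv01 proto

    # gfs2_tool sb /dev/vg01/lv01 table

    The first command is expected to report lock_dlm, and the second one psbmCluster:gfs2.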

  4. Make sure that the created logical volume can be accessed by all servers in the cluster. This ensures that the clustering software can mount the /vz partition, which you will create on the logical volume in the next step, on any of your cluster nodes.
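
    For example, you can rescan and list the logical volumes on every node in the cluster; the commands below are only a quick visibility check, not a complete verification procedure:

    # vgscan

    # lvs vg01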
  5. Tell the node to automatically mount the /vz partition at boot time. To do this, add the /vz entry to the /etc/fstab file on the node. Assuming that your GFS file system resides on the /dev/vg01/lv01 logical volume, you can add the following entry to the fstab file:

    /dev/vg01/lv01 /vz gfs2 defaults,noatime 0 0

    If your GFS file system resides on an LVM volume backed by a partition provided over the iSCSI protocol, define the extra _netdev option in /etc/fstab so that the volume is mounted only after the network and network file systems have been initialized. For example:

    /dev/vg01/lv01 /vz gfs2 defaults,noatime,_netdev 0 0

    Also make sure that the netfs service is enabled to start automatically:

    # chkconfig netfs on
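
    To confirm both settings, you can review the new fstab entry and the runlevels in which netfs is enabled, for example:

    # grep /vz /etc/fstab

    # chkconfig --list netfs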

  6. Configure the gfs2 service on the node to start in the default runlevel. For example, if your system default runlevel is set to 3, you can enable the gfs2 service by executing the following command:

    # chkconfig --level 3 gfs2 on
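
    To confirm that the service is registered for your current default runlevel, you can check, for example:

    # runlevel

    # chkconfig --list gfs2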

  7. Enable the cluster mode, and stop the vz, parallels-server, and PVA Agent services:

    # prlsrvctl set --cluster-mode on

    # service vz stop

    # service parallels-server stop

    # service pvaagentd stop

    # service pvapp stop

  8. Move /vz to a temporary directory /vz1, and create a new /vz directory:

    # mv /vz /vz1; mkdir /vz

    Later on, you will mount the shared data storage located on the GFS volume to the newly created /vz directory and move all data from the /vz1 directory to it.
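
    When you reach that point, the operation might look similar to the following sketch; cp -a is used here so that ownership and permissions are preserved when the data is copied to the GFS volume:

    # mount /vz

    # cp -a /vz1/. /vz/

    Once the copied data is verified, the /vz1 directory can be removed.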

Configuring the Data Storage for Other Nodes in the Cluster

To configure the shared data storage for the second and all remaining nodes in the cluster, do the following:

  1. Tell each node in the cluster to automatically mount the /vz partition at boot time. To do this, add the /vz entry to the /etc/fstab file on each node in the cluster. Assuming that your GFS file system resides on the /dev/vg01/lv01 logical volume, you can add the following entry to the fstab file:

    /dev/vg01/lv01 /vz gfs2 defaults,noatime 0 0

  2. Configure the gfs2 service on each node in the cluster to start in the default runlevel. For example, if your system default runlevel is set to 3, you can enable the gfs2 service by executing the following command on each of the cluster nodes:

    # chkconfig --level 3 gfs2 on