New Openfiler for the VMwire lab

Openfiler is a very cool NAS/SAN product that is completely free. I just purchased some brand new SATA III controllers and disks so I’ve decided to migrate my storage from Freenas running on my HP Microserver to my old Dell PE SC440. Doing so will free up the HP Microserver to run vSphere and allow me to expand my current lab capacity of 1.7Tb to a whopping 7.4Tb. This article details my configuration steps and acts as a means for me to document my VMwire lab and I thought that others may benefit from my experience.

Hardware

Server: Dell PowerEdge SC440 (Intel Pentium Dual Core E2180 2.0GHz, 8Gb RAM)

Networking:
  Intel Corporation 82546EB Gigabit Ethernet PCI-X Dual Port Controller (installed in a PCI 33MHz slot)
  Embedded Broadcom Corporation NetXtreme BCM5754 Gigabit Ethernet PCI Express Controller (onboard)

SATA III Controllers:
  2 x ASUS U3S6 Dual Port SATA III and Dual Port USB 3.0 PCI-E 2.0 x4 Controllers (one installed in the PCI-E 2.0 x4 slot, the other in the PCI-E 2.0 x8 slot)
  1 x HighPoint RocketRAID 620 Dual Port PCI-E 2.0 x1 SATA III Controller (installed in the PCI-E 2.0 x1 slot)

Fibre Channel Controller: 1 x Brocade 2340 Single Port 2Gb Fibre Channel PCI-X Adapter (installed in a PCI 33MHz slot)

Boot Device: Kingston DataTraveller+ 2Gb USB Key (Openfiler boot device)

Storage Disks: 5 x Seagate ST2000DL03-9VT1 2Tb 5400RPM 64Mb Cache SATA III Disks

Installed OS: Openfiler NAS/SAN, kernel 2.6.26.8-1.0.11.smp.gcc3.4.x86_64 (SMP)

 

In total there are six SATA III ports, plus an additional four SATA II ports on the onboard controller.

I have disabled all of the SATA II ports in the BIOS, as I will only be using the SATA III ports on the three add-in cards to support my five Seagate disks in a RAID-5 aggregate.

Overview of the Openfiler Setup

I have already set up my Openfiler server by following the instructions at https://vmwire.com/2011/06/12/how-to-install-and-run-openfiler-on-a-usb-key/ to install Openfiler onto the Kingston DataTraveller+ 2Gb USB Key. Networking is configured and everything is working nicely, ready for the configuration of the new RAID-5 array.

Below is a screenshot of the Openfiler Hardware Information.

Setting up the Software RAID

I’m using software RAID because SATA III RAID controllers with 5 or more ports are currently still very expensive. The HighPoint RocketRAID 620 and the ASUS U3S6 controllers are relatively cheap and can be picked up for less than £20 each.

  1. You can view the list of block devices available to use on Openfiler by navigating to Volumes | Block Devices.

Openfiler has successfully picked up the 5 new disks and we can now manage them.
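If you prefer to double-check from the console, the new disks should also be visible to the kernel. A quick sanity check over SSH (a sketch only; on my system /dev/sda is the USB boot key, so the five data disks appear as /dev/sdb through /dev/sdf):

    # list every disk the kernel can see, plus its size
    fdisk -l 2>/dev/null | grep '^Disk /dev/sd'

    # or simply dump the partition table summary
    cat /proc/partitions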

To set up the Software RAID we must first partition the disks as “RAID array member” partitions.

To do this, click on each of the block devices and create a “RAID array member” partition. Remember to leave /dev/sda alone as this is the boot device.

  1. Clicking on /dev/sdb brings up the partitioning page.
  2. Select RAID array member from the Partition Type drop-down menu and then just click on Create.

Continue creating the partitions on the remaining four disks by repeating Step 1.

Once complete the Block Device Management should look something like this.
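As another optional check from the shell, each data disk should now carry a single partition flagged as a RAID member (partition type fd, Linux raid autodetect). Something like this (a sketch, assuming the data disks are /dev/sdb to /dev/sdf):

    # every data disk should show one partition of type 'fd' (Linux raid autodetect)
    fdisk -l /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf 2>/dev/null | grep -i 'raid autodetect'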

Now we are ready to create the RAID Array. To do this we need to navigate to Volumes | Software RAID and create the array.

I’m going to create a RAID-5 (parity) array with a 64kB chunk size.

The Seagate disks each have only about 1.82Tb of usable capacity, so a five-disk RAID-5 array (the equivalent of one disk is consumed by parity, leaving four disks' worth of space) gives a total of roughly 7.4Tb usable capacity.
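The Software RAID page drives all of this for you, but under the hood it is roughly equivalent to an mdadm command along these lines (a sketch only; the device names and the md0 array name are assumptions based on my setup):

    # create a 5-disk RAID-5 array with a 64kB chunk size
    mdadm --create /dev/md0 --level=5 --chunk=64 --raid-devices=5 \
          /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

    # watch the initial parity build progress
    cat /proc/mdstat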

Setting up the Volume Group

We can create the Volume Group by navigating to Volumes | Volume Groups.

I’ve decided to create a single volume group for all my services – CIFS, NFS, iSCSI and FC. We will then create Volumes from this single Volume Group to carve up the storage. The resulting Volume Group Management page will then look something like this.

That completes the Volume Group setup. You should rarely need to revisit the Volume Group Management page again, unless you rebuild your disks or create new Volume Groups from new disks. For reference, the path for this Volume Group is /dev/md0/volg_raid5/.
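For those curious about what the GUI is doing, the Volume Group is essentially standard LVM on top of the md array; a rough shell equivalent would be (a sketch, not how Openfiler itself has to be driven):

    # initialise the RAID array as an LVM physical volume and build the volume group
    pvcreate /dev/md0
    vgcreate volg_raid5 /dev/md0

    # confirm the size and free space of the new volume group
    vgdisplay volg_raid5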

Setting up a new Volume

Now we can start carving up our 7.4Tb usable capacity by creating Volumes and then assigning these Volumes to specific services.

We can manage this by using the Add Volume page. I’m going to add a new 1Tb NFS volume for my VMware virtual machines.

To do this navigate to Volumes | Add Volume.

Upon creating this new Volume, the path for this Volume will then be /dev/md0/volg_raid5/nfs01/.
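Behind the scenes this is just an LVM logical volume being carved out of volg_raid5; a rough equivalent from the shell would be (a sketch only):

    # carve a 1Tb logical volume named nfs01 out of the volume group
    lvcreate -L 1T -n nfs01 volg_raid5

    # list the logical volumes in the group to confirm
    lvs volg_raid5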

Setting up a new Sub-folder on a Volume

Now that we have created a Volume, we can now create a Sub-folder in which we can make a Share. I’m going with a 1:1 allocation of Volume to Sub-folder in my lab, so the naming conventions reflect this.

To do this, navigate to the Shares page. The screen should be similar to this.

We can create the new Share by clicking on the VMware Virtual Machines link.

Now we can create a new sub-folder, I’m going to call mine nfs01-vm01.

The Shares page will now display the following.

Setting up a new Share

To create the actual nfs01-vm01 NFS share that is mounted on our vSphere servers we need to make the share from the Sub-folder created in “Setting up a new Sub-folder on a Volume”.

  1. Click on the nfs01-vm01 link and then click on the Make Share button.
  2. Select the “Public guest access” radio button and click on Update.
  3. Now scroll to the bottom of the page and click on the “RW” radio button underneath the NFS column, and then click on Update.

Note that you need to have set up the network access configuration prior to doing any of these steps. This is not covered in this guide.

Again, for reference this nfs01-vm01 NFS share for use by VMware vSphere hosts has a path of /mnt/volg_raid5/nfs01/nfs01-vm01/.

Mounting the NFS share

The NFS volume is now ready to be mounted by VMware vSphere hosts. Just use /mnt/volg_raid5/nfs01/nfs01-vm01 as the “Folder” mount point.
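If you prefer the command line on the vSphere side, the same datastore can be added from an ESX/ESXi console or the vMA; a sketch, assuming the Openfiler server resolves as of.virtual.local and you want the datastore labelled nfs01-vm01:

    # add the Openfiler NFS export as a datastore
    esxcfg-nas -a -o of.virtual.local -s /mnt/volg_raid5/nfs01/nfs01-vm01 nfs01-vm01

    # list NAS datastores to confirm it mounted
    esxcfg-nas -l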

Happy days!


Setting up a new Volume for CIFS

Now I can start carving up my remaining 6.4Tb usable capacity by creating Volumes and then assigning these Volumes to specific services. For this part I’m going to create a new Volume for CIFS.

We can manage this by using the Add Volume page. I’m going to add a new 3Tb CIFS volume for my Windows file storage.

To do this navigate to Volumes | Add Volume.

Upon creating this new Volume, the path for this Volume will then be /dev/md0/volg_raid5/cifs01/.

Setting up a new Sub-folder on a Volume for CIFS

Now that we have created a new Volume for CIFS, we can now create a Sub-folder in which we can make a Share. I’m going with a 1:1 allocation of Volume to Sub-folder in my lab, so the naming conventions reflect this.

To do this, navigate to the Shares page. The screen should be similar to this.

We can create the new Share by clicking on the Windows File Share link.

Now we can create a new sub-folder, I’m going to call mine cifs01-smb01.

The Shares page will now display the following.

Setting up a new Share for CIFS

To create the actual cifs01-smb01 CIFS share that can be used for SMB file storage we need to make the share from the Sub-folder created in “Setting up a new Sub-folder on a Volume for CIFS”.

  1. Click on the cifs01-smb01 link and then click on the Make Share button.

  2. Select the “Controlled access” radio button and click on Update.

  3. Now scroll to the Group access configuration section in the middle, select the primary group for this CIFS share and set Read or Read/Write permissions for the share based on your Active Directory groups. I’ve already created a Security Group called “openfiler cifs users”, of which my AD account VIRTUAL\Hugo.Phan is a member. Once done, click Update.

    Note that to use Active Directory authentication, you must set up Authentication under the Accounts section. I’ve written a guide here: https://vmwire.com/2011/06/13/how-to-setup-active-directory-authenticaton-on-openfiler/.

  4. Now scroll all the way down to the bottom and configure the Host access configuration for the /mnt/volg_raid5/cifs01/cifs01-smb01/ Volume. Choose the options relevant to your environment; I’m making this share R/W accessible by all the devices on my network – 192.168.200.0/24. Then just click on Update to finish.

 

Note that you need to have set up the network access configuration prior to doing any of these steps. This is not covered in this guide.

Again, for reference, this cifs01-smb01 CIFS share for use as network file storage has a path of /mnt/volg_raid5/cifs01/cifs01-smb01/.

Connecting to the SMB Share

The SMB share is now ready to be used by client computers. To connect, just open a UNC path to \\of.virtual.local, where of.virtual.local is the FQDN of my Openfiler server.

Now enter your domain credentials.
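If you have a Linux box handy, you can also sanity-check the share with smbclient (a sketch, assuming the share is exported under the name cifs01-smb01):

    # list the shares published by the Openfiler server
    smbclient -L of.virtual.local -U 'VIRTUAL\Hugo.Phan'

    # connect to the CIFS share interactively
    smbclient '//of.virtual.local/cifs01-smb01' -U 'VIRTUAL\Hugo.Phan'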

Job done!

Creating a VMware vCloud Director Cluster

Overview

A VMware vCloud Director (vCD) cluster contains one or more vCD servers; these servers are referred to as “Cells” and form the basis of the VMware cloud. A cloud can be formed of multiple cells.

This diagram is a good representation of the vCD Cluster concept.

To enable multiple servers to participate in a cluster, the same pre-requisites that apply to a single host also apply to each host in the cluster, and in addition the following must be met:

  • each host must mount the shared transfer server storage at $VCLOUD_HOME/data/transfer, which is typically located at /opt/vmware/cloud-director/data/transfer.

This shared storage could be an NFS mount, mounted on all participating servers with rw access for root. It is important to decide whether a cluster is required before configuring the first server. If you intend to use a vCD cluster, configure the shared transfer server storage before executing the vCD installer.

Check out the vCloud Director Installation and Configuration Guide for pre-requisites.

Shared Transfer Server Storage

For this post, I’ve set up an NFS volume on FreeNAS and given all cluster members rw permissions to the volume. It is assumed that you have a completely clean installation of RHEL 5 x64 (or, if like me you are running this in a lab, CentOS 5 x64), with all the latest updates and pre-requisite packages.
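Before touching fstab it is worth confirming that each prospective cell can actually see the export; a quick check from any of the Linux hosts (a sketch, using the FreeNAS server name from my lab):

    # list the exports published by the FreeNAS box
    showmount -e vcd-freenas.vmwire.local

    # optionally test-mount it by hand, then unmount again
    mount -t nfs vcd-freenas.vmwire.local:/mnt/SSD /mnt
    umount /mnt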

Now to mount the volume on all hosts:

  1. Connect to your first host using SSH or log in directly.
  2. Edit your /etc/fstab file and add the following line, remembering to change the NFS server and mount point to match your environment:

     vcd-freenas.vmwire.local:/mnt/SSD /opt/vmware/cloud-director/data/transfer nfs rw,soft,_netdev  0 0

  3. The resulting /etc/fstab should look something like this:

     [screenshot of /etc/fstab]

  4. Now create the shared transfer server storage folder structure, /opt/vmware/cloud-director/data/transfer (a simple mkdir -p will do).
  5. Run chkconfig netfs on.
  6. Repeat steps 1-5 for any other hosts.
  7. Restart the servers (a quick verification sketch follows this list).
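Before restarting, you can also verify the mount straight away from the shell (a quick sketch, assuming the fstab entry above):

    # mount everything listed in /etc/fstab, including the new NFS entry
    mount -a

    # confirm the transfer share is mounted where vCD expects it
    df -h /opt/vmware/cloud-director/data/transfer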

Now you are ready to install vCD onto the first host, making sure that you have met all the pre-requisites detailed in the vCloud Director Installation and Configuration Guide. Once complete, you should have a working cell with its shared transfer server storage folder located on the NFS volume.

Setting up a second cell as part of the Cloud Director Cluster

At this point you should already have a working cell with the vCD shared transfer server storage located on the NFS volume. Before you install vCD onto the second server, the following must be done:

  1. All pre-requisites for a single server installation must also be met for subsequent servers as part of a vCD Cluster
  2. The second server must also have rw access for root to the shared transfer server storage
  3. The second server must also have access to the response file, which is located at /opt/vmware/cloud-director/etc/responses.properties on the first successfully installed server
  4. Copy the above file to the second server or to the shared transfer server storage
  5. It is important to note that the response file contains the values that were used for the first server, and subsequent servers will reuse it. As a result, if you stored the certificates.ks file for the first server in a location not accessible to subsequent servers, the installation script will prompt you for the correct path to the certificates.ks file on each subsequent server. To avoid this, you could create the certificates.ks files for all cluster members up front and place them in the shared transfer server storage, with unique names such as vcd-cell1-certificates.ks and vcd-cell2-certificates.ks.
  6. You can now install vCD onto subsequent servers with the command vmware-cloud-director-1.0.0-285979.bin -r /opt/vmware/cloud-director/data/transfer/responses.properties

The installer will automatically complete most prompts for you, but you will still need to select the correct eth adapter for the http and consoleproxy services; everything else will be automatic.
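Once the installer finishes on the second cell, it is worth checking that the cell service is running and has started cleanly; a sketch, assuming the default installation paths:

    # check the vCD cell service on the new server
    service vmware-vcd status

    # follow the cell log while it starts up and connects to the shared database
    tail -f /opt/vmware/cloud-director/logs/cell.log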

Go ahead and have a play and maybe even deploy a load balancer on top.

Here’s a screenshot of my two cells working side by side, connecting to the same shared transfer server storage and Oracle database, and managing the same vCenters.

For more information read the overview at Yellow Bricks which also includes links to the product pages.