Openfiler is a very cool NAS/SAN product that is completely free. I just purchased some brand new SATA III controllers and disks, so I've decided to migrate my storage from FreeNAS running on my HP MicroServer to my old Dell PE SC440. Doing so will free up the HP MicroServer to run vSphere and expand my current lab capacity from 1.7Tb to a whopping 7.4Tb. This article details my configuration steps and documents my VMwire lab; I thought that others may benefit from my experience.
Hardware

| Hardware | Make/Model | Details |
| --- | --- | --- |
| Server | Dell PowerEdge SC440 | Intel Pentium Dual Core E2180 2.0GHz, 8Gb RAM |
| Networking | Intel Corporation 82546EB Gigabit Ethernet PCI-X Dual Port Controller | Installed in PCI 33MHz slot |
| | Embedded Broadcom Corporation NetXtreme BCM5754 Gigabit Ethernet PCI Express Controller | Onboard |
| SATA III Controllers | 2 x ASUS S3U6 Dual Port SATA III and Dual Port USB 3.0 PCI-E 2.0 x4 Controllers | One installed in the PCI-E 2.0 x4 slot, the other in the PCI-E 2.0 x8 slot |
| | 1 x HighPoint RocketRAID 620 Dual Port PCI-E 2.0 x1 SATA III Controller | Installed in PCI-E 2.0 x1 slot |
| Fibre Channel Controller | 1 x Brocade 2340 Single Port 2Gb Fibre Channel PCI-X Adapter | Installed in PCI 33MHz slot |
| Boot Device | Kingston DataTraveller+ 2Gb USB Key | Openfiler boot device |
| Storage Disks | 5 x Seagate ST2000DL03-9VT1 2Tb 5400RPM 64Mb Cache SATA III Disks | |
| Installed OS | Openfiler NAS/SAN | 2.6.26.8-1.0.11.smp.gcc3.4.x86_64 (SMP) |
The total number of SATA III ports is six, plus an additional four SATA II ports from the onboard adapter.
I have disabled all of the SATA II ports in the BIOS, as I will only be using the SATA III ports on the three add-in cards to support my five Seagate disks in a RAID-5 array.
Overview of the Openfiler Setup
I have already set up my Openfiler server by following the instructions from https://vmwire.com/2011/06/12/how-to-install-and-run-openfiler-on-a-usb-key/ to install Openfiler onto the Kingston DataTraveller+ 2Gb USB Key. Networking is set up and everything is working nicely, ready for the configuration of the new RAID-5 array.
Below is a screenshot of the Openfiler Hardware Information.
Setting up the Software RAID
I'm using software RAID because SATA III RAID controllers with 5 or more ports are currently still very expensive. The HighPoint RocketRAID 620 and the ASUS S3U6 controllers are relatively cheap and can be picked up for less than £20 each.
- You can view the list of block devices available to use on Openfiler by navigating to Volumes | Block Devices.
Openfiler has successfully picked up the 5 new disks and we can now manage them.
To set up the Software RAID we must first partition the disks as “RAID array member” partitions.
To do this you must click on each of the block devices and create a "RAID array member" partition. Remember to leave /dev/sda alone as this is the boot device.
- Clicking on /dev/sdb brings up the partitioning page.
- Select RAID array member from the Partition Type drop down menu and then just click on Create.
Continue creating the partitions on the remaining four disks by repeating the steps above.
Once complete the Block Device Management should look something like this.
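Behind the scenes, the GUI is creating a partition of type "fd" (Linux raid autodetect) on each disk. If you prefer the command line, a rough sketch of the equivalent with sfdisk is below — the device names /dev/sdb through /dev/sdf are assumptions from my setup, so verify yours (e.g. with `cat /proc/partitions`) before running anything destructive:

```shell
# Hypothetical member disks from this build - verify against your own system first.
DISKS="/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf"

for DISK in $DISKS; do
  # One full-size partition of type fd (Linux raid autodetect) per disk.
  # The real command is commented out so nothing destructive runs by accident.
  # As root:
  #   printf 'type=fd\n' | sfdisk "$DISK"
  printf 'would partition %s as a RAID array member\n' "$DISK"
done
```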
Now we are ready to create the RAID Array. To do this we need to navigate to Volumes | Software RAID and create the array.
I’m going to create a RAID-5 (parity) array with a 64kB chunk size.
The Seagate disks each have only about 1.82Tb of usable capacity, so a RAID-5 array across all five disks, with one disk's worth of capacity given over to parity, yields a total of around 7.4Tb usable capacity.
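As a quick sanity check on that figure, the usable capacity of an n-disk RAID-5 array is (n − 1) × per-disk capacity, since one disk's worth of space holds parity:

```shell
DISKS=5
PER_DISK_TB=1.82   # usable capacity per Seagate 2Tb disk

# RAID-5 keeps (n - 1) disks' worth of data; the remainder stores parity.
USABLE=$(awk -v n="$DISKS" -v d="$PER_DISK_TB" 'BEGIN { printf "%.2f", (n - 1) * d }')
echo "$USABLE"   # 7.28
```

So roughly 7.3Tb of raw arithmetic, in the same ballpark as the capacity the Openfiler UI reports.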
Setting up the Volume Group
We can create the Volume Group by navigating to Volumes | Volume Groups.
I’ve decided to create a single volume group for all my services – CIFS, NFS, iSCSI and FC. We will then create Volumes from this single Volume Group to carve up the storage. The resulting Volume Group Management page will then look something like this.
That's our Volume Group setup complete. You should not need to revisit the Volume Group Management page again unless you either rebuild your disks or create new Volume Groups from new disks. For reference, the path for this Volume Group is /dev/md0/volg_raid5/.
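For reference, the GUI steps above correspond roughly to the standard LVM commands. This is a sketch rather than something you need to run — the device and volume group name are taken from my setup:

```shell
MD_DEV="/dev/md0"        # the RAID-5 array created earlier
VG_NAME="volg_raid5"

# As root, the rough CLI equivalent would be:
#   pvcreate "$MD_DEV"
#   vgcreate "$VG_NAME" "$MD_DEV"
CMD="vgcreate $VG_NAME $MD_DEV"
echo "$CMD"
```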
Setting up a new Volume
Now we can start carving up our 7.4Tb usable capacity by creating Volumes and then assigning these Volumes to specific services.
We can manage this by using the Add Volume page. I’m going to add a new 1Tb NFS volume for my VMware virtual machines.
To do this navigate to Volumes | Add Volume.
Upon creating this new Volume, the path for this Volume will then be /dev/md0/volg_raid5/nfs01/.
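If you are curious what the GUI does on the command line, creating the volume maps roughly onto an lvcreate call. A sketch, using the names from my setup:

```shell
VG_NAME="volg_raid5"
LV_NAME="nfs01"
LV_SIZE="1T"

# As root, the rough equivalent of the Add Volume page would be:
CMD="lvcreate --name $LV_NAME --size $LV_SIZE $VG_NAME"
echo "$CMD"
# (Openfiler also creates the filesystem on the new logical volume for you.)
```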
Setting up a new Sub-folder on a Volume
Now that we have created a Volume, we can create a Sub-folder in which we can make a Share. I'm going with a 1:1 allocation of Volume to Sub-folder in my lab, so the naming conventions reflect this.
To do this, navigate to the Shares page. The screen should be similar to this.
We can create the new Share by clicking on the VMware Virtual Machines link.
Now we can create a new sub-folder; I'm going to call mine nfs01-vm01.
The Shares page will now display the following.
Setting up a new Share
To create the actual nfs01-vm01 NFS share that is mounted on our vSphere servers we need to make the share from the Sub-folder created in "Setting up a new Sub-folder on a Volume".
- Click on the nfs01-vm01 link and then click on the Make Share button.
- Select the “Public guest access” radio button and click on Update.
- Now scroll to the bottom of the page and click on the “RW” radio button underneath the NFS column, and then click on Update.
Note that you need to have set up network access configuration prior to doing any of these steps. This is not covered in this guide.
Again, for reference, this nfs01-vm01 NFS share for use by VMware vSphere hosts has a path of /mnt/volg_raid5/nfs01/nfs01-vm01/.
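Under the hood, the "RW" setting ends up as an entry in /etc/exports on the Openfiler server. A hedged sketch of what that line looks like, assuming a 192.168.200.0/24 lab network like mine (the exact options Openfiler writes may differ):

```
/mnt/volg_raid5/nfs01/nfs01-vm01 192.168.200.0/24(rw,no_root_squash,sync)
```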
Mounting the NFS share
The NFS volume is now ready to be mounted by VMware vSphere hosts. Just use /mnt/volg_raid5/nfs01/nfs01-vm01 as the “Folder” mount point.
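If you prefer the ESXi command line to the vSphere Client, the mount can be scripted. This is a sketch assuming the esxcli storage namespace from ESXi 5.x and my Openfiler hostname of of.virtual.local:

```shell
OF_HOST="of.virtual.local"                    # my Openfiler server's FQDN
NFS_PATH="/mnt/volg_raid5/nfs01/nfs01-vm01"   # the share path from above

# Run this on the ESXi host (ESXi 5.x); on ESX/ESXi 4.x use esxcfg-nas -a instead.
CMD="esxcli storage nfs add --host=$OF_HOST --share=$NFS_PATH --volume-name=nfs01-vm01"
echo "$CMD"
```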
Happy days!
Setting up a new Volume for CIFS
Now I can start carving up my remaining 6.4Tb usable capacity by creating Volumes and then assigning these Volumes to specific services. For this part I’m going to create a new Volume for CIFS.
We can manage this by using the Add Volume page. I’m going to add a new 3Tb CIFS volume for my Windows file storage.
To do this navigate to Volumes | Add Volume.
Upon creating this new Volume, the path for this Volume will then be /dev/md0/volg_raid5/cifs01/.
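Again, for the command-line curious, the GUI action maps roughly onto a single lvcreate call. A sketch using the names from my setup:

```shell
VG_NAME="volg_raid5"

# Rough CLI equivalent of adding the 3Tb CIFS volume via the Add Volume page:
CMD="lvcreate --name cifs01 --size 3T $VG_NAME"
echo "$CMD"
```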
Setting up a new Sub-folder on a Volume for CIFS
Now that we have created a new Volume for CIFS, we can create a Sub-folder in which we can make a Share. I'm going with a 1:1 allocation of Volume to Sub-folder in my lab, so the naming conventions reflect this.
To do this, navigate to the Shares page. The screen should be similar to this.
We can create the new Share by clicking on the Windows File Share link.
Now we can create a new sub-folder; I'm going to call mine cifs01-smb01.
The Shares page will now display the following.
Setting up a new Share for CIFS
To create the actual cifs01-smb01 CIFS share that can be used for SMB file storage we need to make the share from the Sub-folder created in “Setting up a new Sub-folder on a Volume for CIFS”.
- Click on the cifs01-smb01 link and then click on the Make Share button.
- Select the “Controlled access” radio button and click on Update.
- Now scroll to the Group access configuration section in the middle, select the primary group for this CIFS share, and set Read or Read/Write permissions for this share based on your Active Directory groups. I've already created a Security Group called "openfiler cifs users", of which my AD account VIRTUAL\Hugo.Phan is a member. Once done, click Update.
Note that to use Active Directory authentication, you must set up Authentication under the Accounts section. I’ve written a guide here https://vmwire.com/2011/06/13/how-to-setup-active-directory-authenticaton-on-openfiler/.
- Now scroll all the way down to the bottom and configure the Host access configuration to the /mnt/volg_raid5/cifs01/cifs01-smb01/ Volume. Choose the options relevant to your environment, I’m making this share R/W accessible by all my devices in my network – 192.168.200.0/24. Then just click on Update to finish.
Note that you need to have set up network access configuration prior to doing any of these steps. This is not covered in this guide.
Again, for reference, this cifs01-smb01 CIFS share for use as network file storage has a path of /mnt/volg_raid5/cifs01/cifs01-smb01/.
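For the curious, Openfiler drives Samba under the hood, so the "Controlled access" share ends up as a stanza in smb.conf along these lines. This is only a sketch — the options Openfiler actually writes will differ, the share name is assumed to match the sub-folder, and the group is the one I created above:

```
[cifs01-smb01]
    path = /mnt/volg_raid5/cifs01/cifs01-smb01
    valid users = @"openfiler cifs users"
    writeable = yes
    browseable = yes
```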
Connecting to the SMB Share
The SMB share is now ready to be used by client computers. To connect, just open a UNC path to it; of.virtual.local is the FQDN of my Openfiler server.
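The UNC path itself is just the server FQDN plus the share name. A small sketch, with the share name assumed to match the sub-folder created earlier:

```shell
HOST="of.virtual.local"
SHARE="cifs01-smb01"   # assumed to match the sub-folder name

# Compose the UNC path a Windows client would open.
UNC="\\\\${HOST}\\${SHARE}"
printf '%s\n' "$UNC"

# From Windows:  net use Z: \\of.virtual.local\cifs01-smb01 /user:VIRTUAL\Hugo.Phan
# From Linux:    smbclient "//$HOST/$SHARE" -U 'VIRTUAL\Hugo.Phan'
```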
Now enter your domain credentials.
Job done!