A List of VMware Employee Tweeps (people on Twitter)

Following on from the PSO NEMEA Twitter list, I decided to go further and produce this list of VMware employees who are on Twitter, sorted alphabetically by Twitter ID as of 29/06/2011.

Let me know if I have missed you out or if you follow someone who works for VMware.

Twitter ID Name Blog
Adrian Roberts  
Alan Renouf www.virtu-al.net
Andrew Mitchell  
Andy Banta  
Arnim van Lieshout www.van-lieshout.com
Kamau Wanguhu www.borgcube.com
Brian Thomas Rice  
Chris Colotti www.chriscolotti.us
Christophe Decanini www.vcoteam.info
Christoph Harding www.thatsmyview.net
Brittany Coulson  
Carter Shanklin  
Dale Carter www.delboycarter.com
Dave Hill www.virtual-blog.com
Douglas Phillips  
Richard Damoser  
Duncan Epping www.yellow-bricks.com
Eric Gray www.vcritical.com
Frank Denneman www.frankdenneman.nl
Frank Wegner  
Hany Michael www.hypervizor.com
Andy Troup  
Stephen Herrod www.vmware.com/company/leadership.html
Pablo Roesch  
Hugo Strydom www.vroem.co.za
Hugo Phan www.vmwire.com
Jean-Francois Richard  
Jerry Chen  
Johnny Krogsboll  
Joe Sarabia  
John Troyer  
Julie Escott  
Greg A Lato www.latogalabs.com
Lode Vermeiren lodev.name
Max Daneri  
Manish Patel  
Mark Verhagen  
Martyn Storey  
Matthew Meyer  
Dave McCrory blog.mccrory.me
Matt Coppinger
Michael Haines  
Mike DiPetrillo www.mikedipetrillo.com
Massimo Re Ferre’ it20.info
Nadyne Richmond www.nadynerichmond.com
Peter Giordano petergiordano.com
Paul Nothard  
Rawlinson Rivera www.punchingclouds.com
Rasmus Jensen www.vpeeling.com
Ray Heffer www.rayheffer.com
Raymon Epping  
Richard McDougall blog.richardmcdougall.com
Rick Blythe www.vmwarewolf.com
Robin Prudholm  
Rob Upham  
Safouh Kharrat  
Scott Davis blogs.vmware.com/view-point
Simon Long www.simonlong.co.uk
Steve Jin www.doublecloud.org
Scott Sauer unhub.com/ssauer
Stan Hutten Czapski  
Susan Gudenkauf  
Burke Azbill www.vcoteam.info
Tedd Fox about.me/teddfox
Richard Garsthagen www.run-virtual.com
John Dodge www.dodgeretort.com
Tom Ralph about.me/TomRalph
Tony Dunn  
Tristan Todd  
Timo Sugliani  
Jason Miles  
John Arrasjid  
Alexander Thoma  
Vegard Sagbakken   
Vic Camacho wefollow.com/Virtual_Vic
Andrew Johnson  
Irfan virtualscoop.org
Todd Muirhead  
Mark C  
Josh Liebster vmsupergenius.com
Vittorio Viarengo journeytocloud.com
Wade Holmes  
Willem van Engeland  
Jian Zhen zhen.org

VMware PSO NEMEA Twitter List

A list of VMware PSO consultants covering NEMEA who are on Twitter, sorted alphabetically by Twitter ID.

Follow us for tweets from the real world.

Twitter ID Name Blog
Adrian Roberts
Arnim van Lieshout www.van-lieshout.com
Didier Pironet deinoscloud.wordpress.com
Frank Denneman www.frankdenneman.nl
Hugo Strydom www.vroem.co.za
Hugo Phan www.vmwire.com
Rasmus Jensen www.vpeeling.com
Ray Heffer www.rayheffer.com
Simon Long www.simonlong.co.uk
Jason Miles

Map created using templates from http://www.presentationmagazine.com.

How to update your Openfiler USB installation for better performance

Previously I wrote an article on How to install and run Openfiler on a USB key. I thought that everything was working fine, but I eventually found that NFS and CIFS performance was too slow. After reading a few forums and stumbling across this thread in particular, it turned out the cause was Openfiler requiring an update.

I have since tried to update the installation by running conary updateall at the CLI. Unfortunately, this installs an updated kernel (2.6.29.6-0.24.smp.gcc3.4.x86_64 (SMP)) and also a new ramdisk which makes all the hard work from the previous post defunct. This article shows you how to perform the update and then make a new initrd-usb-update.img to work with the new kernel.

So, assuming you’ve made a successful USB key using the previous article, continue with the following to update your Openfiler installation and make the updated installation bootable from the USB key.

Update Openfiler

Let’s first update Openfiler.

  1. Log into the CLI as root
  2. Run

    conary updateall

  3. This will take a while as it downloads around 26 packages and installs them.
  4. Once complete, insert the Openfiler CD into your drive and restart your system, making sure that it boots from the CD.
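Before you reboot in Step 4, you can sanity-check that the new kernel and its ramdisk actually landed in /boot (a quick look, assuming the version string quoted above):

    ls /boot/vmlinuz-* /boot/initrd-*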

Creating a new ramdisk that works with the new kernel

This part is very similar to the steps in the previous post; there are some minor additions to make, but for completeness I’ve included all the steps here.

  1. Once Openfiler finishes booting from the CD, type:

    linux rescue

  2. Go through the menus and select your region and keyboard and skip the automatic mounting of your installed OS. We will do this manually.
  3. When the prompt appears, create a directory to mount the USB key.

    mkdir /mnt/sysimage

  4. Now mount the / of the USB key onto /mnt/sysimage.

    mount /dev/sda2 /mnt/sysimage

    Note: your / partition may be /dev/sda3 instead, depending on how you setup your partitioning during the installation of Openfiler.

  5. Now mount the boot partition of the USB key onto /mnt/sysimage/boot.

    mount /dev/sda1 /mnt/sysimage/boot

    Note: your boot partition may be a different device, depending on how you set up your partitioning during the installation of Openfiler.

  6. Make /mnt/sysimage your working environment by changing your root location, so that you are working on the file system on the USB key.

    chroot /mnt/sysimage

  7. Copy the current initrd file to a temporary location where we can work on it.

    cp /boot/initrd-1 /tmp/initrd.gz

    Note 1: now’s a good time to press TAB. There will now be two kernels; use 2.6.29.6-0.24.smp.gcc3.4.x86_64, as this is the updated kernel that was installed during the update.

  8. Gunzip the initrd.gz file

    gunzip /tmp/initrd.gz

  9. Make a temporary working directory

    mkdir /tmp/b

    We are using /tmp/b because /tmp/a already exists as the temporary working directory from the previous article.

  10. Go into the new working directory

    cd /tmp/b

  11. Extract the contents of the initrd file into /tmp/b.

    cpio -i < /tmp/initrd

  12. Now we edit the init file to load the USB and SCSI drivers for the new initrd-usb-update.img ramdisk.

    nano init

  13. Do a search for "mount -t proc /proc /proc" and add the following lines underneath it. We want to load these USB and storage modules before any other modules, which is why these entries need to be near the top of the file. Note that there is a new module, crc-t10dif.ko, which is required by the new kernel to boot from USB and so must be loaded at init time.

    echo "Starting Openfiler on USB"

    echo "Loading scsi_mod.ko module"

    insmod /lib/scsi_mod.ko

    echo "Starting crc-t10dif.ko module"

    insmod /lib/crc-t10dif.ko

    echo "Loading sd_mod.ko module"

    insmod /lib/sd_mod.ko

    echo "Loading sr_mod.ko module"

    insmod /lib/sr_mod.ko

    echo "Loading ehci-hcd.ko module"

    insmod /lib/ehci-hcd.ko

    echo "Loading uhci-hcd.ko module"

    insmod /lib/uhci-hcd.ko

    echo "Loading ohci-hcd.ko module"

    insmod /lib/ohci-hcd.ko

    sleep 5

    echo "Loading usb-storage.ko module"

    insmod /lib/usb-storage.ko

    sleep 5

  14. Do a search for insmod /lib/scsi_mod.ko, insmod /lib/sd_mod.ko, insmod /lib/ehci-hcd.ko, insmod /lib/uhci-hcd.ko and insmod /lib/crc-t10dif.ko, and remove these duplicate entries from the rest of the file. We do not want these loaded again.
  15. Save the file and exit with CTRL X, then Y, or if you used vi then :wq!
  16. Now we need to copy all of the modules in Step 13 into our working directory.
  17. Go to the drivers directory

    cd /lib/modules/2/kernel/drivers

    Note 2: just press TAB to fill in this bit. There will now be two kernels; use 2.6.29.6-0.24.smp.gcc3.4.x86_64, as this is the updated kernel that was installed during the update.

  18. Copy all of the modules in Step 13 to /tmp/b/lib.

    cp scsi/scsi_mod.ko /tmp/b/lib

    cp scsi/sr_mod.ko /tmp/b/lib

    cp scsi/sd_mod.ko /tmp/b/lib

    cp usb/host/ehci-hcd.ko /tmp/b/lib

    cp usb/host/uhci-hcd.ko /tmp/b/lib

    cp usb/host/ohci-hcd.ko /tmp/b/lib

    cp usb/storage/usb-storage.ko /tmp/b/lib

    cp /lib/modules/2.6.29.6-0.24.smp.gcc3.4.x86_64/kernel/lib/crc-t10dif.ko /tmp/b/lib

  19. Now let’s package the contents of the working directory /tmp/b into our new initrd-usb-update.img.

    cd /tmp/b

    find . | cpio -c -o | gzip -9 > /boot/initrd-usb-update.img

  20. Now all we need to do is edit the /boot/grub/menu.lst file to tell the kernel to use the new ramdisk that we just created. Remember that this new ramdisk is currently located in /boot/ (aka /dev/sda1) and is called initrd-usb-update.img.

    nano /boot/grub/menu.lst

  21. Find the line starting with initrd /initrd-…………………… and replace it with

    initrd /initrd-usb-update.img

  22. Save the file, reboot the computer, remove the CD and allow it to boot from the USB key. You now have your updated Openfiler installation booting directly from the USB key.

Turn off flow control for SMB Clients

For better CIFS performance, turn off flow control on your network adapter. With flow control turned off I can achieve a sustained 60 mb/s transfer between my MacBook and Openfiler, whereas previously I was only achieving around 30 mb/s.
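Exactly how you disable flow control on the client side depends on the operating system and NIC driver. As a rough sketch for a Linux SMB client (assuming the interface is called eth0 and its driver supports pause control through ethtool):

    # Show the current pause (flow control) settings
    ethtool --show-pause eth0

    # Turn off pause auto-negotiation and RX/TX pause frames
    ethtool --pause eth0 autoneg off rx off tx off

On Windows clients the equivalent setting usually lives on the Advanced tab of the NIC driver properties.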

Turn off flow control for ESXi hosts using NFS/iSCSI to Openfiler

First understand what flow control is before performing the following actions; the articles below provide good cases for either enabling or disabling flow control and auto-negotiation of flow control.

http://www.telecom.otago.ac.nz/tele301/student_html/ethernet-autonegotiation-flow-control.html – not to be confused with auto-negotiation of flow control.

http://virtualthreads.blogspot.com/2006/02/beware-ethernet-flow-control.html

Since this is my lab I’m going to disable flow control completely.

To do this on ESXi hosts follow these instructions or use VMware KB 1013413.

  1. Enable Remote SSH for the ESXi host first.
  2. Use your favourite SSH client and log in as root (assuming you can; disable lockdown mode if necessary).
  3. Run the following command to list all your vmnic interfaces and make a note of the vmnic that is used to connect to the Openfiler server.

    esxcfg-nics -l

  4. In my case it’s just vmnic0 (I’m using an HP Microserver). Type the following command to see the current flow control status of that adapter.

    ethtool --show-pause vmnic0

  5. Run the following commands to set auto-negotiation, RX flow control or TX flow control; any combination is possible.
  6. To disable flow control for sent and received traffic, use the command:

    ethtool --pause vmnic0 tx off rx off

  7. To disable auto-negotiation of flow control, use the command:

    ethtool --pause vmnic0 autoneg off

  8. Open the /etc/rc.local file using a text editor and append the same commands used in Steps 6 and 7, placing each on its own line. Then save the file.
  9. For an ESXi host, save the configuration change using the command:

    /sbin/auto-backup.sh

    The commands added to the /etc/rc.local file will be executed at startup, persisting the configuration changes across reboots. Because you already ran them interactively in Steps 6 and 7, no reboot is required for them to take effect.
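For reference, the tail end of my /etc/rc.local ends up looking like this (a sketch, assuming vmnic0 is the adapter facing Openfiler, as identified in Step 4):

    # Disable flow control on the Openfiler-facing uplink at every boot
    ethtool --pause vmnic0 tx off rx off
    ethtool --pause vmnic0 autoneg off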

How to setup Active Directory authentication on Openfiler

Setup Active Directory Authentication

The steps must be performed in this order; otherwise you’ll get a headache trying to work out why you cannot see any Groups listed.

  1. Go to Services and enable the SMB/CIFS server.
  2. Click on SMB/CIFS Setup.
  3. Change the NetBIOS name to just the hostname of the server (do not include the domain).
  4. Navigate to Accounts | Expert View and configure for your environment, noting the CAPITALIZATION of some of the fields.
  5. Click on Use Kerberos 5 and enter your domain details, again noting the CAPITALIZATION of some of the fields.
  6. Now click on Accounts | Group List; if everything was configured correctly, you should see your Domain groups.
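As a quick sanity check from the Openfiler CLI (a sketch, assuming Openfiler’s SMB/CIFS service is using winbind underneath, which is how Samba normally talks to Active Directory):

    wbinfo -t

    wbinfo -g

wbinfo -t verifies the trust with the domain controller, and wbinfo -g should list the same domain groups that appear under Accounts | Group List.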

How to install and run Openfiler on a USB key

Install Openfiler onto the USB Key

  1. Disconnect all hard disks from the server.
  2. Boot from the Openfiler CD, invoke the linux expert command and install Openfiler as normal onto your USB key, adjusting the partitioning to fit your USB key and making a record of which partitions are your /boot and / partitions (an example layout is shown below).
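As an illustration only (sizes will vary with the size of your USB key, and a swap partition is best avoided on flash media), a 2Gb key could be laid out along these lines:

    /dev/sda1   /boot   ext3   approx. 100MB
    /dev/sda2   /       ext3   remainder of the key

This matches the device names assumed in the steps below; if you partition differently, substitute your own devices wherever /dev/sda1 and /dev/sda2 appear.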

Create a new ramdisk that includes the USB drivers.

  1. When installation is complete, reboot the server with the Openfiler CD still in the drive and then type

    linux rescue

  2. Go through the menus and select your region and keyboard and skip the automatic mounting of your installed OS. We will do this manually.
  3. When the prompt appears, create a directory to mount the USB key.

    mkdir /mnt/sysimage

  4. Now mount the / of the USB key onto /mnt/sysimage.

    mount /dev/sda2 /mnt/sysimage

    Note: your / partition may be /dev/sda3 instead, depending on how you setup your partitioning during the installation of Openfiler.

  5. Now mount the boot partition of the USB key onto /mnt/sysimage/boot.

    mount /dev/sda1 /mnt/sysimage/boot

    Note: your boot partition may be a different device, depending on how you set up your partitioning during the installation of Openfiler.

  6. Make /mnt/sysimage your working environment by changing your root location, so that you are working on the file system on the USB key.

    chroot /mnt/sysimage

  7. Copy the current initrd file to a temporary location where we can work on it.

    cp /boot/initrd-1 /tmp/initrd.gz

    Note1: now’s a good time to press TAB

  8. Gunzip the initrd.gz file

    gunzip /tmp/initrd.gz

  9. Make a temporary working directory

    mkdir /tmp/a

  10. Go into the new working directory

    cd /tmp/a

  11. Extract the contents of the initrd file into /tmp/a.

    cpio -i < /tmp/initrd

  12. Now we edit the init file to load the USB and SCSI drivers for the new initrd-usb.img ramdisk.

    nano init

  13. Do a search for "mount -t proc /proc /proc" and add the following lines underneath it. We want to load these USB and storage modules before any other modules, which is why these entries need to be near the top of the file.

    echo "Starting Openfiler on USB"

    echo "Loading scsi_mod.ko module"

    insmod /lib/scsi_mod.ko

    echo "Loading sr_mod.ko module"

    insmod /lib/sr_mod.ko

    echo "Loading sd_mod.ko module"

    insmod /lib/sd_mod.ko

    echo "Loading ehci-hcd.ko module"

    insmod /lib/ehci-hcd.ko

    echo "Loading uhci-hcd.ko module"

    insmod /lib/uhci-hcd.ko

    echo "Loading ohci-hcd.ko module"

    insmod /lib/ohci-hcd.ko

    sleep 5

    echo "Loading usb-storage.ko module"

    insmod /lib/usb-storage.ko

    sleep 5

  14. Do a search for insmod /lib/scsi_mod.ko, insmod /lib/sd_mod.ko, insmod /lib/ehci-hcd.ko and insmod /lib/uhci-hcd.ko and remove these duplicate entries from the rest of the file. We do not want these loaded again.
  15. Save the file and exit with CTRL X, then Y, or if you used vi then :wq!
  16. Now we need to copy all of the modules in Step 13 into our working directory.
  17. Go to the drivers directory

    cd /lib/modules/2/kernel/drivers

    Note2: just press tab to fill in this bit, you should only have one kernel.

  18. Copy all of the modules in Step 13 to /tmp/a/lib.

    cp scsi/scsi_mod.ko /tmp/a/lib

    cp scsi/sr_mod.ko /tmp/a/lib

    cp scsi/sd_mod.ko /tmp/a/lib

    cp usb/host/ehci-hcd.ko /tmp/a/lib

    cp usb/host/uhci-hcd.ko /tmp/a/lib

    cp usb/host/ohci-hcd.ko /tmp/a/lib

    cp usb/storage/usb-storage.ko /tmp/a/lib

  19. Now let’s package the contents of the working directory /tmp/a into our new initrd-usb.img.

    cd /tmp/a

    find . | cpio -c -o | gzip -9 > /boot/initrd-usb.img

  20. Now all we need to do is edit the /boot/grub/menu.lst file to tell the kernel to use the new ramdisk that we just created. Remember that this new ramdisk is currently located in /boot/ (aka /dev/sda1) and is called initrd-usb.img.

    nano /boot/grub/menu.lst

  21. Find the line starting with initrd /initrd-…………………… and replace it with

    initrd /initrd-usb.img

  22. Save the file, reboot the computer, remove the CD and allow it to boot from the USB key. You now have your Openfiler installation booting directly from the USB key.

New Openfiler for the VMwire lab

Openfiler is a very cool NAS/SAN product that is completely free. I just purchased some brand new SATA III controllers and disks, so I’ve decided to migrate my storage from FreeNAS running on my HP Microserver to my old Dell PE SC440. Doing so will free up the HP Microserver to run vSphere and allow me to expand my current lab capacity from 1.7Tb to a whopping 7.4Tb. This article details my configuration steps and acts as a means for me to document my VMwire lab; I thought that others may benefit from my experience.

Hardware

 

Server: Dell PowerEdge SC440, Intel Pentium Dual Core E2180 2.0GHz, 8Gb RAM

Networking: Intel Corporation 82546EB Gigabit Ethernet PCI-X Dual Port Controller (installed in a PCI 33MHz slot); embedded Broadcom Corporation NetXtreme BCM5754 Gigabit Ethernet PCI Express Controller (onboard)

SATA III Controllers: 2 x ASUS S3U6 Dual Port SATA III and Dual Port USB 3.0 PCI-E 2.0 x4 Controllers (one installed in the PCI-E 2.0 x4 slot, the other in the PCI-E 2.0 x8 slot); 1 x HighPoint RocketRAID 620 Dual Port PCI-E 2.0 x1 SATA III Controller (installed in the PCI-E 2.0 x1 slot)

Fibre Channel Controller: 1 x Brocade 2340 Single Port 2Gb Fibre Channel PCI-X Adapter (installed in a PCI 33MHz slot)

Boot Device: Kingston DataTraveller+ 2Gb USB Key (Openfiler boot device)

Storage Disks: 5 x Seagate ST2000DL03-9VT1 2Tb 5400RPM 64Mb Cache SATA III Disks

Installed OS: Openfiler NAS/SAN, kernel 2.6.26.8-1.0.11.smp.gcc3.4.x86_64 (SMP)

 

That gives a total of six SATA III ports, plus an additional four SATA II ports from the onboard controller.

I have disabled all the SATA II ports in the BIOS, as I will only be using the SATA III ports from the three add-in cards to support my 5 Seagate disks in a RAID-5 array.

Overview of the Openfiler Setup

I have already setup my Openfiler Server by following the instructions from https://vmwire.com/2011/06/12/how-to-install-and-run-openfiler-on-a-usb-key/ to install Openfiler onto the Kingston DataTraveller+ 2Gb USB Key. Networking is setup and everything is working nicely ready for the configuration of the new RAID-5 Array.

Below is a screenshot of the Openfiler Hardware Information.

Setting up the Software RAID

I’m using software RAID because SATA III RAID controllers with 5 or more ports are currently still very expensive. The HighPoint RocketRaid 630 and the ASUS S6U3 controllers are relatively cheap and can be picked up for less than £20 each.

  1. You can view the list of block devices available to use on Openfiler by navigating to Volumes | Block Devices.

Openfiler has successfully picked up the 5 new disks and we can now manage them.

To set up the Software RAID we must first partition the disks as “RAID array member” partitions.

To do this you must click on each of the block devices and create a “RAID array member” partition. Remember to leave /dev/sda alone as this is the boot device.

  1. Clicking on /dev/sdb brings up the partitioning page.
  2. Select RAID array member from the Partition Type drop down menu and then just click on Create.

Continue creating the partitions on the remaining four disks by repeating Steps 1 and 2.

Once complete the Block Device Management should look something like this.

Now we are ready to create the RAID Array. To do this we need to navigate to Volumes | Software RAID and create the array.

I’m going to create a RAID-5 (parity) array with a 64kB chunk size.

The Seagate disks only have about 1.82Tb usable capacity, so a RAID-5 array with 5 disks would give a total of 7.4Tb usable capacity.
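Openfiler drives standard Linux software RAID (md) under the hood, so the array built through the GUI here is roughly equivalent to the following mdadm command (a sketch only, assuming the five RAID array member partitions came up as /dev/sdb1 through /dev/sdf1):

    # Create a 5-disk RAID-5 array with a 64kB chunk size
    mdadm --create /dev/md0 --level=5 --chunk=64 --raid-devices=5 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

    # Watch the array build; usable RAID-5 capacity is (number of disks - 1) x disk size
    cat /proc/mdstat

There is no need to run this by hand; it is shown only to make clear what the Software RAID page is doing for you.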

Setting up the Volume Group

We can create the Volume Group by navigating to Volumes | Volume Groups.

I’ve decided to create a single volume group for all my services – CIFS, NFS, iSCSI and FC. We will then create Volumes from this single Volume Group to carve up the storage. The resulting Volume Group Management page will then look something like this.

That’s our Volume Group setup complete. You should never need to revisit the Volume Group Management page again unless you either rebuild your disks or create new Volume Groups from new disks. For reference, the path for this Volume Group is /dev/md0/volg_raid5/.

Setting up a new Volume

Now we can start carving up our 7.4Tb usable capacity by creating Volumes and then assigning these Volumes to specific services.

We can manage this by using the Add Volume page. I’m going to add a new 1Tb NFS volume for my VMware virtual machines.

To do this navigate to Volumes | Add Volume.

Upon creating this new Volume, the path for this Volume will then be /dev/md0/volg_raid5/nfs01/.

Setting up a new Sub-folder on a Volume

Now that we have created a Volume, we can now create a Sub-folder in which we can make a Share. I’m going with a 1:1 allocation of Volume to Sub-folder in my lab, so the naming conventions reflect this.

To do this, navigate to the Shares page. The screen should be similar to this.

We can create the new Share by clicking on the VMware Virtual Machines link.

Now we can create a new sub-folder, I’m going to call mine nfs01-vm01.

The Shares page will now display the following.

Setting up a new Share

To create the actual nfs01-vm01 NFS share that is mounted on our vSphere servers, we need to make the share from the Sub-folder created in “Setting up a new Sub-folder on a Volume”.

  1. Click on the nfs01-vm01 link and then click on the Make Share button.
  2. Select the “Public guest access” radio button and click on Update.

  3. Now scroll to the bottom of the page and click on the “RW” radio button underneath the NFS column, and then click on Update.

Note that you need to have setup network access configuration prior to doing any of these steps. This is not covered in this guide.

Again, for reference this nfs01-vm01 NFS share for use by VMware vSphere hosts has a path of /mnt/volg_raid5/nfs01/nfs01-vm01/.

Mounting the NFS share

The NFS volume is now ready to be mounted by VMware vSphere hosts. Just use /mnt/volg_raid5/nfs01/nfs01-vm01 as the “Folder” mount point.
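If you prefer the command line to the vSphere Client, the same datastore can be added to an ESXi 4.x host with esxcfg-nas (a sketch; substitute your Openfiler server’s IP address or FQDN for the placeholder):

    # Add the Openfiler NFS export as a datastore called nfs01-vm01
    esxcfg-nas -a -o <openfiler_ip> -s /mnt/volg_raid5/nfs01/nfs01-vm01 nfs01-vm01

    # List NAS datastores to confirm it mounted
    esxcfg-nas -l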

Happy days!

[NEW]

Setting up a new Volume for CIFS

Now I can start carving up my remaining 6.4Tb usable capacity by creating Volumes and then assigning these Volumes to specific services. For this part I’m going to create a new Volume for CIFS.

We can manage this by using the Add Volume page. I’m going to add a new 3Tb CIFS volume for my Windows file storage.

To do this navigate to Volumes | Add Volume.

Upon creating this new Volume, the path for this Volume will then be /dev/md0/volg_raid5/cifs01/.

Setting up a new Sub-folder on a Volume for CIFS

Now that we have created a new Volume for CIFS, we can now create a Sub-folder in which we can make a Share. I’m going with a 1:1 allocation of Volume to Sub-folder in my lab, so the naming conventions reflect this.

To do this, navigate to the Shares page. The screen should be similar to this.

We can create the new Share by clicking on the Windows File Share link.

Now we can create a new sub-folder, I’m going to call mine cifs01-smb01.

The Shares page will now display the following.

Setting up a new Share for CIFS

To create the actual cifs01-smb01 CIFS share that can be used for SMB file storage we need to make the share from the Sub-folder created in “Setting up a new Sub-folder on a Volume for CIFS”.

  1. Click on the cifs01-smb01 link and then click on the Make Share button.
  2. Select the “Controlled access” radio button and click on Update.
  3. Now scroll to the Group access configuration section in the middle, select the primary group for this CIFS share and set Read or Read/Write permissions for the share based on your Active Directory groups. I’ve already created a Security Group called “openfiler cifs users”, of which my AD account VIRTUAL\Hugo.Phan is a member. Once done, click Update.

    Note that to use Active Directory authentication, you must set up Authentication under the Accounts section. I’ve written a guide here: https://vmwire.com/2011/06/13/how-to-setup-active-directory-authenticaton-on-openfiler/.

  4. Now scroll all the way down to the bottom and configure the Host access configuration for the /mnt/volg_raid5/cifs01/cifs01-smb01/ Volume. Choose the options relevant to your environment; I’m making this share R/W accessible by all devices in my network (192.168.200.0/24). Then just click on Update to finish.

 

Note that you need to have setup network access configuration prior to doing any of these steps. This is not covered in this guide.

Again, for reference this cifs01-smb01 CIFS share for use as network file storage over the network has a path of /mnt/volg_raid5/cifs01/cifs01-smb01/.

Connecting to the SMB Share

The SMB share is now ready to be used by client computers. To connect, just open a UNC path to it (of.virtual.local is the FQDN of my Openfiler server).
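If you would rather test the share from a Linux box first, smbclient does the same job (a sketch; the exported share name depends on what Openfiler generated when you made the share, so substitute your own):

    smbclient //of.virtual.local/<share_name> -U Hugo.Phan -W VIRTUAL

A successful connection drops you at an smb: \> prompt, which confirms the share and the Active Directory authentication are working before you go near a Windows client.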

Now enter your domain credentials.

Job done!

VMware Tools RPM Installation with PXEBOOT and Kickstart

Purpose

This article explains how to use the Operating System Specific Packages (OSPs) to install VMware Tools during the PXEBOOT and Kickstart of a Linux guest OS running on vSphere 4.1 and above.

Background

As of vSphere 4.1, the VMware Tools RPMs are no longer available on the CD image linux.iso. To install you must use the tar.gz package and install it manually.

To quote the vSphere 4.0 U2 release notes:

“The VMware Tools RPM installer, which is available on the VMware Tools ISO image for Linux guest operating systems, has been deprecated and will be removed in a future ESX release. VMware recommends using the tar.gz installer to install VMware Tools on virtual machines with Linux guest operating systems.”

This future ESX release is indeed vSphere 4.1. The RPM packages were removed from the VMware Tools CD image for Linux guest operating systems in order to reduce the footprint of ESXi, so linux.iso no longer contains the RPM installer.

Not only is this a real pain for customers who have always deployed packages onto their Linux guests with RPMs, but removing the VMware Tools RPM also causes issues with the deployment of Linux guests through PXEBOOT and Kickstart methods.

Yes, you could argue that there is the option of using vCenter Templates along with Customization Specifications (CS) for Linux, and I for one love the fact that the CS works very well for Linux. However, this is not always an option for a customer who has already configured their entire environment to deploy both ESXi servers and virtual machine guests with PXEBOOT and Kickstart.

The only option here is to perform manual installations of VMware Tools using the tar.gz file which is included with vSphere 4.1. This of course poses a few issues.

One: this is a repetitive task which is both time consuming and problematic if the number of new VMs to provision is high.

Two: some security conscious organisations do not allow the gcc Compiler and/or the Linux Kernel sources to be installed on the VM guests. Both the gcc Compiler and the Linux Kernel sources are mandatory for the successful installation of VMware Tools using the tar.gz file.

Three: if the guest VM is using a VMXNET2 or VMXNET3 ethernet adapter, then the guest VM will not have compatible drivers if VMware Tools is not installed.

These are just three of the reasons that I’ve seen in the wild; there are more, but I won’t go into much detail here.

So how do we go about solving this?

Have a read of KB article 1024047: ESXi 4.x does not include RPM format for VMware Tools.

The solution here is to use the Operating System Specific Packages (OSPs). VMware Tools OSPs allow you to use your operating system’s native update mechanisms to automatically download, install, and manage VMware Tools for the supported operating systems. For more information regarding OSPs, see http://www.vmware.com/download/packages.html.

For this guide, we are referring to RHEL’s yum update mechanism.

This guide details how you can use the OSP installation guide (http://www.vmware.com/pdf/osp_install_guide.pdf) to prepare your Build Server for the automated deployment of VMware Tools to a RHEL guest in a PXEBOOT environment.

Resolution

For this guide, I will be using the Ultimate Deployment Appliance 2.0 (uda20.build17) to deploy a RHEL 5.5 x64 virtual machine.

First we need to prepare to install the operating system specific packages (OSPs) for RHEL 5 guest operating systems on the Build Server. We do this by preparing the directory structure for the RPMs and placing them in the correct location on the Build Server for the guest VM to download during a Kickstart installation.

Perform the following on your workstation.

The OSPs are located on the VMware Tools packages Web site at http://packages.vmware.com/tools.

  1. First download the entire directory that contains the packages relevant to your operating system. For this guide I will be using the RHEL x86_64 packages located at http://packages.vmware.com/tools/esx/4.1u1/rhel5/x86_64/index.html.

     

  2. Clicking on this link will give you the following list of files.

  3. Do a right-click Save-As and save these files to a location of your choice. I settled for a folder called vmwaretools on my Desktop.

     

  4. Also create a folder within vmwaretools called repodata, and download the files from http://packages.vmware.com/tools/esx/4.1u1/rhel5/x86_64/repodata into your vmwaretools/repodata directory.

     

  5. Once these two tasks have been completed you will see the following directory contents of vmwaretools.

  6. And the contents of vmwaretools/repodata.

  7. Now you will need to download the VMware Packaging Public Keys.

     

  8. Create a directory called keys within vmwaretools/

     

  9. Point your web browser to http://packages.vmware.com/tools/keys and download all of the files into your vmwaretools/keys directory.

     

  10. The contents of the keys directory should look like this

     

  11. You should now have a vmwaretools folder structure that looks like this

     

  12. Now copy the entire vmwaretools directory to your Build Server using your favourite SCP application. I’m using the UDA 2.0, so I will be placing vmwaretools in /var/public/www/kickstart/rhel/. “rhel” is the ‘Flavor’ name that I called my RHEL 5 OS configuration.
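For example, from the workstation the whole prepared folder can be pushed across in one go with scp (assuming the UDA answers on 192.168.200.30, the address used later in the kickstart file):

    scp -r vmwaretools root@192.168.200.30:/var/public/www/kickstart/rhel/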

Perform the following on the Build Server.

  1. On the UDA 2.0 the contents of the /var/public/www/kickstart/rhel/vmwaretools directory should now look like this.

     

  2. If you navigate to http://<ip_address_of_uda>/kickstart/rhel/vmwaretools/, you should be able to see the same folder structure. If you can’t, then you may need to do a chmod 755 on the vmwaretools directory and its contents.
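     For example, to open up the directory and everything beneath it in one go (assuming the UDA paths used above):

     chmod -R 755 /var/public/www/kickstart/rhel/vmwaretools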

     

  3. Now edit your kickstart file to look something like this.

    %post

    # Import the VMware Packaging Public Keys from the Build Server.

    rpm --import http://[UDA_IPADDR]/kickstart/[TEMPLATE]/vmwaretools/keys/VMWARE-PACKAGING-GPG-DSA-KEY.pub

    rpm --import http://[UDA_IPADDR]/kickstart/[TEMPLATE]/vmwaretools/keys/VMWARE-PACKAGING-GPG-RSA-KEY.pub

    # Create and edit the VMware repository directory and file, note that this points to the Build Server during Kickstart build. Once VMware Tools is installed we shall change the baseurl to point to the VMware OSP URL.

    # Add the following contents to the repository file and save

    cat > /etc/yum.repos.d/vmware-tools.repo <<\EOF1

    [vmware-tools]

    name=VMware Tools

    baseurl=http://192.168.200.30/kickstart/rhel/vmwaretools

    enabled=1

    gpgcheck=1

    EOF1

    # Install VMware Tools by accepting all defaults

    yum install -y vmware-tools

    # Delete the customised vmware-tools.repo file and reconfigure with the baseurl to point to the VMware OSP URL for RHEL5 64-bit.

    rm -f /etc/yum.repos.d/vmware-tools.repo

    cat > /etc/yum.repos.d/vmware-tools.repo <<\EOF2

    [vmware-tools]

    name=VMware Tools

    baseurl=http://packages.vmware.com/tools/esx/4.1u1/rhel5/x86_64

    enabled=1

    gpgcheck=1

    EOF2

  4. I have attempted to use the VMware Packaging Public Keys from the VMware OSP repository URL, but this does not work. Anyone care to enlighten me in the comments below?

    I tried this:

    # Import the VMware Packaging Public Keys

    rpm --import http://packages.vmware.com/tools/keys/VMWARE-PACKAGING-GPG-DSA-KEY.pub

    rpm --import http://packages.vmware.com/tools/keys/VMWARE-PACKAGING-GPG-RSA-KEY.pub

     

  5. I also attempted to use the VMware OSP URL directly in the vmware-tools.repo file without luck either. If you can get this to work then please comment below and I’ll update the post.

You should now be able to deploy a guest Linux VM running on vSphere ESXi 4.1 through PXEBOOT and Kickstart and have the VM automatically install VMware Tools. Plus, all future updates of VMware Tools can just be done by invoking a yum install -y vmware-tools.
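A quick way to confirm the result once a kickstarted VM comes up (a sketch; the exact package list depends on which OSP components yum pulled in):

    # List the VMware Tools OSP packages that were installed
    rpm -qa | grep -i vmware-tools

    # Pull in newer packages when VMware publishes them to the OSP repository
    yum update vmware-tools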

Enabling VMXNET 3 for PXEBOOT and KICKSTART of RHEL Virtual Machines

Purpose

This guide shows you how to create a new initrd.img with the VMXNET 3 driver integrated, so that RHEL virtual machines equipped with a VMXNET 3 network adapter can be Kickstart-built over PXEBOOT.

Background

VMware’s VMXNET 3 network adapter supports PXE booting but RHEL 5 does not have a driver that supports network installations using the default initrd.img.

If you try to perform an automated installation using Kickstart with the standard initrd.img, you will see the following screen:

 

This is because Anaconda does not recognise the VMXNET 3 device and therefore is not able to load a driver for it.

This guide shows you how to create a new initrd.img with integration for the VMXNET3 driver.

For the impatient few, I’ve made the resulting initrd.img.vmxnet file available for download, it is a clean ramdisk image that was made using the steps below.

It is the PXEBOOT RAMDISK with the VMXNET3 driver for RHEL5 (created from the rhel-server-5.5-x86_64-dvd) [2.6.18-194.el5].

Tested and working to support VMXNET3 in Anaconda. You can download it from here: initrd.img.vmxnet, then jump all the way to Step 18 to place it on your Build Server.

Prerequisites

Prepare a Reference Virtual Machine

First create a new reference virtual machine with the following hardware specifications:

VM Hardware Version: Hardware Version 7
Network Adapter: VMXNET 3
SCSI Controller: LSI Logic SAS
SYSTEM .vmdk Device: SCSI 0:0, 15Gb
Remove Floppy Device: Yes

Install RHEL (rhel-server-5.5-x86_64-dvd) by mounting the ISO to the VM, then perform a manual installation of VMware Tools. This gives you the reference virtual machine from which you will copy vmxnet.ko and vmxnet3.ko.

Enable sshd services on the Reference VM by typing:

/etc/init.d/sshd start

This will make it a lot easier to copy files to your Build Server.

Prepare your Build Server

Create your own PXEBOOT and Kickstart installation or use one that you already have. For my example I will be using the Ultimate Deployment Appliance 2.0 (uda20.build17).

Most of the configuration is done on Build Server so by all means enable SSHD to make things a lot easier for you.

My Build Server IP is 192.168.200.30.

Integrating VMXNET 3 into initrd.img

At this point you should have SSH access to both your Build Server and your Reference VM.

Perform the following on the Build Server.

1.    Create some directories to work in

mkdir /tmp/workingdir

mkdir /tmp/workingdir/initrd

mkdir /tmp/workingdir/modules

Perform the following on the Reference VM.

2.    Obtain the initial ramdisk initrd.img from the pxeboot directory. This file can be found on the rhel-server-5.5-x86_64-dvd ISO, which should still be connected to the Reference VM, in the images/pxeboot directory.

3.    Mount the ISO image

mount /dev/cdrom /media

4.    Copy the initrd.img to the Build Server

scp /media/images/pxeboot/initrd.img root@192.168.200.30:/tmp/workingdir/

5.    We now need to ascertain the PCI bus location and device ID of the VMXNET 3 network adapter by first running

lspci

        Note that our VMware VMXNET3 Ethernet Controller lives at 0b:00.0

6.    With this information we can obtain the HEX number for the device by running

lspci -n

Note the HEX value for device 0b:00.0 is 15ad:07b0.

Perform the following on the Build Server.

7.    Unpack the initrd.img file to allow us to amend the ramdisk; you should be in /tmp/workingdir/initrd/

zcat ../initrd.img | cpio -id

 

8.    Extract the modules.cgz archive from within the initrd subdirectory

cd /tmp/workingdir/modules

zcat ../initrd/modules/modules.cgz | cpio -id

Perform the following on the Reference VM.

9.    Copy the vmxnet*.ko modules from the Reference VM over to the Build Server. The vmxnet*.ko files are located in /lib/modules/2.6.18-194.el5/misc

scp /lib/modules/2.6.18-194.el5/misc/vmxnet*.ko root@192.168.200.30:/tmp/workingdir/modules/2.6.18-194.el5/x86_64/

10.    Copy the modules.alias file from the Reference VM to the Build Server for use later on. This file contains the vmxnet entries and is located at /lib/modules/2.6.18-194.el5/

scp /lib/modules/2.6.18-194.el5/modules.alias root@192.168.200.30:/tmp/workingdir/initrd/modules/modules.alias.reference

Perform the following on the Build Server.

11.    Change permissions for the two new vmxnet*.ko files, you should be in /tmp/workingdir/modules/2.6.18-194.el5/x86_64/

chmod 744 vmxnet*

12.    Pack up the new modules.cgz, which now includes the vmxnet*.ko modules, by creating a new cpio archive to replace the old modules.cgz.

cd /tmp/workingdir/modules

find . | cpio -o -H crc | gzip -9 > /tmp/workingdir/initrd/modules/modules.cgz

    After a few seconds the operation will complete.

13.    Modify the pci.ids file with an entry for the VMXNET 3 adapter.

cd /tmp/workingdir/initrd/modules

nano pci.ids

14.    Search for VMware and add the following line under the Abstract SVGA Adapter

07b0    VMware Adapter

    The 07b0 number here is whatever was obtained in Step 6 above.

15.    Edit the module-info file and add the following entries for the VMXNET and VMXNET 3 adapters; put them in under ‘v’ to keep the file in alphabetical order. You should still be in /tmp/workingdir/initrd/modules/

nano /tmp/workingdir/initrd/modules/module-info

vmxnet

    eth

    "VMware vmxnet Ethernet driver"

vmxnet3

    eth

    "VMware vmxnet3 Ethernet driver"

16.    Import the vmxnet entries from the Reference VM’s modules.alias file (now called modules.alias.reference) into the Build Server’s modules.alias file.

grep vmxnet /tmp/workingdir/initrd/modules/modules.alias.reference >> /tmp/workingdir/initrd/modules/modules.alias

The contents of the new modules.alias file should look like this.
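If a screenshot is not to hand: the imported lines are ordinary modalias entries keyed on the PCI vendor and device IDs (15ad for VMware and 07b0 for VMXNET 3, as found in Step 6), so the vmxnet3 line will look roughly like this (the exact wildcard fields can differ between kernel builds):

alias pci:v000015ADd000007B0sv*sd*bc*sc*i* vmxnet3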

17.    Package the new initrd.img ramdisk up with all the changes done above.

cd /tmp/workingdir/initrd

find . | cpio -o -H newc | gzip -9 > /tmp/workingdir/initrd.img.vmxnet

18.    Copy the new initrd.img.vmxnet into the PXEBOOT environment. On UDA2.0 this location is /var/public/tftproot/

cp /tmp/workingdir/initrd.img.vmxnet /var/public/tftproot/

19.    Edit your PXEBOOT configuration to use the new initrd.img.vmxnet file instead of the standard initrd.img file. My example uses the UDA.

cd /var/public/conf/templates/

nano rhel.dat

20.    On the line CMDLINE=, edit the initrd= entry to point to the new initrd.img.vmxnet instead.

CMDLINE=ks=http://[UDA_IPADDR]/kickstart/[TEMPLATE]/[SUBTEMPLATE].cfg initrd=initrd.img.vmxnet ramdrive_size=8192

21.    That’s it. Now PXEBOOT a VM and it will be able to Kickstart using the VMXNET 3 network adapter.