USX 2.1 – What’s New?

USX 2.1 is now available, bringing some major milestones and a number of minor improvements over previous versions.

USX Volume Dashboard

Major milestones:

  1. VMware support for USX on the VMware HCL; the VMware KB is in this link.
  2. VMware support for the Atlantis NAS VAAI Plugin; the VMware Compatibility Guide for USX is in this link.
    • The Atlantis NAS VAAI Plugin is now officially supported at the VMwareAccepted acceptance level: VIBs with this acceptance level go through verification testing, but the tests do not fully test every function of the software. The partner runs the tests and VMware verifies the result. Today, CIM providers and PSA plugins are among the VIBs published at this level. VMware directs support calls for VIBs with this acceptance level to the partner’s support organization.
    • The Atlantis NAS VAAI Plugin can now be installed using VMware Update Manager.

Minor improvements:

  1. Added Incremental Backups for SnapClones for Simple Volumes (VDI use cases).
  2. Added Session Timeout – A new preference was added so that you can configure the number of minutes that a session can be idle before it is terminated.
  3. Added vCenter hierarchy view for Mount and Unmount of Volumes.
  4. Added a new Volume Dashboard that shows availability and status reporting improvements, including colour codes for various conditions, all of which roll up to a redesigned volume dashboard providing an overview of a volume’s configuration, resource use and health.
  5. Improved Status Updates.
  6. Added Active Directory and LDAP Authentication to USX Manager.
  7. Added the option to tolerate one node failure in USX clusters of up to 5 nodes; previously this was limited to clusters of up to 4 nodes.
  8. Added a REST API to support changing the USX database server.
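As a sketch of what driving that new REST API could look like from a script: the endpoint path and payload fields below are assumptions for illustration, not the documented USX API, so check the USX API browser for the real call.

```python
import json
import urllib.request

def build_db_server_request(manager, db_host, db_port=27017):
    """Build (but do not send) a REST request that points USX Manager at a
    new database server. URL path and field names are hypothetical."""
    payload = {"databaseHost": db_host, "databasePort": db_port}
    return urllib.request.Request(
        url=f"https://{manager}/usxmanager/settings/database",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

req = build_db_server_request("usx-manager.lab.local", "db2.lab.local")
# urllib.request.urlopen(req)  # would send the request in a real environment
```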

Coming to VMworld US? Get yourself a VVOL-compliant, all-software storage array

This article details all of my and Atlantis’ activities at VMworld US. Read on for an introduction to what we will be doing and announcing, plus a sneak peek at our upcoming technology roadmap, which addresses some of the major business issues around performance, capacity and availability today. It is indeed going to be a VMworld with ‘no limits’, and one of the great innovations we will be announcing is Teleport. More on this later!

Teleport your files
No limits with Teleport

I’ll be in San Francisco from Saturday 23rd August until Thursday 28th August, representing the USX team: looking after the Hands on Labs, running live demos and holding expert one-on-ones at the booth. Come and visit to learn more about USX and how it can help you get more performance and capacity out of your VMware and storage infrastructure. I’d love to hear from you.

Where can you find me?

Atlantis is a Gold sponsor this year with Hands on Labs, a booth and multiple speaking sessions. Read on to find out what we’ll be announcing and where you can find my colleagues and me.

Booth in the Exhibitor Hall

I’ll mostly be located at booth 1529, next to the main VMware booth. Just head straight up past the HP, EMC, NetApp and Dell stands and come speak to me about how USX can help you claim more performance and capacity from these great enterprise storage arrays.

Speak to me about USX data services and I’ll show you some great live demos on how you can reclaim up to 5 times your storage capacity and gain 10 times more performance out of your VMware environment.

Here’s one showing USX as storage for vCloud Director in a Service Provider context and also for Horizon View.

If that’s not enough then come and speak to me about some of these great innovations:

  • If you’ve been waiting for a VVOL-compliant, all-software vendor to try VVOLs with the vSphere 6 beta, then wait no more.
  • VMware VVOL Support – all of your storage past, present and future instantly become VVOL compliant with USX.
  • Teleport – the vMotion of the storage world, which gives you the ability to move VMs, VMDKs and files between multiple data centers and the cloud in seconds to improve agility (if you’re thinking it’s Storage vMotion, trust me, it is not).
  • And more….
Location in Solutions Exchange


We have three breakout sessions this year, two of them with our customers UHL and Northrim Bank, where Dave Rose and Erick Stoeckle respectively will take you through how they use USX in production.

The other breakout session is focused on VVols, VASA, VSAN and USX Data Services and will be delivered by our CTO and Founder Chetan Venkatesh (@chetan_). If you have not had the pleasure of hearing Chetan speak before, then please don’t miss this opportunity. The guy is insane and uses just one slide with one picture to explain everything to you. He is a great storyteller and you shouldn’t miss it – even if it’s just for the F bombs that he likes to drop.

Chetan will also do a repeat 20-minute condensed session in the Solutions Exchange for a brain dump of Atlantis USX Data Services. Don’t miss this! Chetan will take you through the great new technology in the Atlantis kitbag.

Session Title | Speaker(s) | When | Where
STP3212 – Unleashing the Awesomeness of the SDDC with Atlantis USX | Chetan Venkatesh – Founder and CTO, Atlantis Computing | Tuesday, Aug 26, 11:20 AM – 11:40 AM | Solutions Exchange Theater, Booth 1901
INF2951-SPO – Unleashing SDDC Awesomeness with Atlantis USX: Building a Storage Infrastructure for Tier 1 VMs with vVOLS, VASA, VSAN and Atlantis USX Data Services | Chetan Venkatesh – Founder and CTO, Atlantis Computing | Wednesday, Aug 27, 12:30 PM – 1:30 PM | Somewhere in the Moscone (TBC)
EUC2654 – UK Hospital Switches From Citrix XenApp to VMware Horizon Saving £2.5 Million and Improving Patient Care | Dave Rose – Head of Design Authority, UHL; Seth Knox – VP Products, Atlantis Computing | Wednesday, Aug 27, 1:00 PM – 2:00 PM | Somewhere in the Moscone (TBC)
STO2767 – Northrim Bank and USX | Erick Stoeckle, Northrim Bank; Nishi Das – Director of Product Management, ILIO USX, Atlantis Computing Inc. | Thursday, Aug 28, 1:30 PM – 2:30 PM | Somewhere in the Moscone (TBC)

Hands on Labs

You can find the hands on labs in the Hands on Labs hall, and I’ll be there to support you if you’re taking this lab. The Atlantis USX HOL is titled:

HOL-PRT-1465 – Build a Software-based Storage Infrastructure for Tier 1 VM Workloads with Atlantis USX Data Services.

This HOL consists of three modules, each of which can be taken separately or one after the other.

Modules 1 and 2 are read-and-click modules where you will follow the instructions in the lab guide and create the USX constructs using the Atlantis USX GUI.

Module 3, however, uses the Atlantis USX API browser to quickly perform the steps of Module 1 with some JSON code.

All three modules will take you approximately an hour and a half to complete.

Writing this lab was an interesting balancing exercise in working with the limited resources assigned to my Org VDC. Please provide feedback on this lab if you can; it’ll help with future versions of this HOL. Just tweet me at @hugophan. Thanks!

Note that performance will be an issue because we are using the VMworld Hands on Labs hosted on Project NEE/OneCloud. This is a vCloud Director cloud in which the ESXi servers that you will see in vCenter are actually all virtual machines. Any VMs that you run on these ESXi servers will themselves be what we call nested VMs. In some cases you could actually see two or more nested levels. How’s that for inception? Just be aware that the labs are for a GUI, concept and usability feel, and not for performance.

If you want to see performance, come to our booth!

VMware Hands on Labs with 3 layers of nested VMs!

Hands on Labs modules

Module 1

Module Title

Atlantis USX – Deploying together with VMware VSAN to deliver optimized local storage

Module Narrative

Using Atlantis USX, IT organizations can pool VSANs with existing shared storage, while optimizing it with Atlantis USX In-Memory storage technology to boost performance, reduce storage capacity and provide storage services such as high availability, fast cloning and unified management across all datacenter storage hardware. The student will be taken through how to build a Hybrid virtual volume that optimizes VMware VSAN, allowing it to deliver high-performing virtual workloads from local storage.

Module Objectives

  • Build a USX Capacity Pool using the underlying VMware VSAN datastore
  • Build a USX Performance Pool from local server RAM
  • Build a Hybrid USX virtual volume suitable for running SQL Server
  • Present the Atlantis USX virtual volume to ESX over NFS

Development Notes

A customer has built a resilient datastore from local storage using VSAN. This is then pooled by Atlantis USX to provide the deduplication and I/O optimization that server workloads require. A joint whitepaper of this solution has already been written here:

Estimated module duration: 45 minutes


Module 2

Module Title

Atlantis USX – Build In-Memory Storage

Module Narrative

With Atlantis USX In-Memory storage optimization, processing computationally intensive analytics becomes easier and more cost-effective, allowing more data to be processed per node and reducing the time to complete these IO-intensive jobs. Workloads may include Hadoop, Splunk and MongoDB. During this lab the student will be taken through how to build an Atlantis USX virtual volume using local server memory.

Module Objectives

  • Build a USX Performance Pool aggregating server RAM from a number of ESX hosts.
    • Log into the web-based management interface and connect it to the vCenter hosting the ESX infrastructure.
    • Export the memory from the three ESX hosts onto the network using Atlantis aggregation technology.
    • Combine the discrete RAM resources into a protected performance pool with the Pool creation wizard.
  • Build an In-Memory virtual volume suitable for running a big data application.
    • Run through the Create Virtual Volume wizard, selecting In-Memory and deploying the In-Memory Virtual Volume.
  • Present the Virtual Volume (datastore) to ESX over NFS.
    • Add the newly created datastore into ESX.

Development Notes

The use case for this lab is increasing application performance by taking advantage of the storage optimization features in Atlantis USX.

Estimated module duration: 30 minutes


Module 3

Module Title

Atlantis USX – Using the RESTful API to drive automation and orchestration to scale a software-based storage infrastructure

Module Narrative

Atlantis USX has a powerful set of RESTful APIs. This module will give you insight into those APIs by using them to build out a Virtual Volume. In this module you will:

  • Connect to the USX API browser and review the available APIs
  • Create a Capacity and Memory Pool with the API
  • Create a Virtual Volume with the API

Development Notes

The intent of this lab is to provide an example of how to use the Atlantis USX RESTful API to deploy USX at scale.

Estimated module duration: 15 minutes
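To give a feel for Module 3, here is a hedged sketch of building the JSON for a Virtual Volume request in Python. The URL and field names are assumptions modeled on the lab narrative, not the documented USX API; in the lab itself you would use the USX API browser.

```python
import json
import urllib.request

# Hypothetical manager URL for illustration only.
USX_MANAGER = "https://usx-manager.lab.local/usxmanager"

def build_volume_request(name, capacity_gb, performance_gb, export="NFS"):
    """Build (but do not send) the JSON request used to carve a Virtual
    Volume from the Capacity and Performance Pools."""
    payload = {
        "name": name,
        "capacityPoolGB": capacity_gb,        # space taken from the Capacity Pool
        "performancePoolGB": performance_gb,  # space taken from the Performance Pool
        "exportType": export,                 # how the volume is presented to ESX
    }
    return urllib.request.Request(
        url=f"{USX_MANAGER}/volumes",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_volume_request("usx-hyb-vol1", capacity_gb=100, performance_gb=5)
# urllib.request.urlopen(req)  # in the lab this is driven via the API browser
```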


Oculus giveaway! See the reality in software defined storage

That’s right! I’ll be giving some of these away at the booth, make sure you stop by to see the new reality in software defined storage!

You can also pick up some of the usual freebies like T-shirts, pens, notepads etc.

There are also Google Glasses, Chromecasts, quad copters and others. We’re also working on something special. Watch this space.

Live Demos at the Booth

Come and speak to me and my colleagues to learn how USX works. We will be running live demos of the following subjects:

  • USX – Storage Consolidation for any workload on any storage.
  • USX – Database Performance Acceleration.
    • Run Tier-1 workloads on commodity hardware.
    • Run Tier-1 high performance workloads on non all-flash or hybrid storage arrays.
  • USX – All Flash Hyper-Converged with SuperMicro/IBM/SANdisk.
  • USX – Teleport (think vMotion for VMs, VMDKs and files over long distances and high latency links). Come talk to me for a live demo.
Beam me up Scotty!
  • USX Tech Preview – Cloud Gateway – using USX data services with AWS S3 as primary storage.
  • USX – VDI on USX on VSAN.
  • VDI – NVIDIA 3D Graphics.

Atlantis Events

SF Giants Game

SF Giants Game, Mon, Aug 25th at 19:00. Please contact your Atlantis Representative or ping me a note if you haven’t received an invite.

USX Partner Training & Breakfast, Wed, Aug 27th at 08:00. Please contact your Atlantis Representative or ping me a note if you’re an Atlantis Partner but have not received an invite.

Let’s meet up!

If you’re at VMworld or in the SF Bay area then let’s meet up and expand our networks.

Date | Hours | Event Name | Where | Register
Sat, Aug 23rd | 19:00 – 22:00 | VMworld Community Kickoff | Johnny Foley’s, 243 O’Farrell Street |
Sun, Aug 24th | 13:00 – 16:00 | #Opening Acts | City View at Metreon |
Sun, Aug 24th | 15:00 – 17:00 | #v0dgeball Charity Tournament | SOMA Rec Center – Corner of Folsom and 6th Streets |
Sun, Aug 24th | 16:00 – 19:00 | VMworld Welcome Reception | Solutions Exchange, Moscone Center | n/a
Sun, Aug 24th | 20:00 – 23:00 | #VMunderground | City View at Metreon |
Mon, Aug 25th | 19:00 – 23:00 | #vFlipCup VMworld Community TweetUp | Folsom Street Foundry |
Tues, Aug 26th | 16:30 – 18:00 | Hall Crawl | Solutions Exchange, Moscone Center | n/a
Tues, Aug 26th | 19:00 – 22:00 | #VCDX, #vExpert Party | E&O Restaurant & Lounge, 314 Sutter St | Invite only
Tues, Aug 26th | 20:00 – 23:00 | #vBacon | Ferry Building, 1 Sausalito |
Wed, Aug 27th | 17:00 – 19:00 | VMware vCHS Tweetup | 111 Minna |
Wed, Aug 27th | 19:00 – 22:00 | VMworld Party | Moscone Center | n/a

Can’t meet up?

Follow me and my colleagues on Twitter for live updates during VMworld and send us messages and questions. We’d love to hear from you.

Hugo Phan @hugophan

Chetan Venkatesh @chetan_

Seth Knox @seth_knox

Mark Nijmeijer @MarkNijmeijerCA

Gregg Holzrichter @gholzrichter

Toby Colleridge @tobyjcol

Is Atlantis USX the future of Software Defined Storage?

More information on #USX and a great write-up from Storage Swiss – The Home of Storage Switzerland.

Software Defined Storage (SDS) has certainly caught the attention of IT planners looking to reduce the cost of storage by liberating them from traditional storage hardware lock-in. As SDS evolves the promise of lower storage CAPEX, increased deployment and architecture flexibility, paired with lower OPEX through decreased complexity may emerge from suppliers of this technology. Atlantis USX looks to lead this trend, claiming to deliver all-flash array performance for half the cost of a traditional SAN.

Atlantis USX Architecture

From an architectural perspective, USX has the same roots as Atlantis’s VDI solution, except that it’s focused on virtual server workloads instead of virtual desktops. As part of the enhancements for server virtualization, USX has added the ability to pool any storage resource between servers (SAN, NAS, Flash, RAM, SAS, SATA), it’s added data protection to ensure reliability in case of a host failure and has built its own high availability…



Virtual Volumes – Explained with Carousels, Horses and Unicorns – in pictures

[Tongue in cheek. There’s no World Cup on today so I made this. Please don’t take this too seriously.]

A SAN is like a carousel

  • It provides capacity (just like a carousel) and performance (when the carousel goes around).
  • People ride on static horses bolted to the carousel and try to enjoy the ride.
  • This horse is like a LUN. The horse does not know who is riding it.
  • Everybody travels at the same speed unless you happen to sit on the outside where things go a little bit faster.
  • The speed is relative to how fast the carousel rotates and how quickly you can get to an outside seat (if you want that extra speed and wind through your hair).
  • If you want to guarantee an outside seat, you can get to the front of the queue by having a FastPass+.
  • Get a bigger motor, or increase the speed, the carousel will respond to the required needs.
  • Everybody experiences the same relative performance even though they may want different things – to go faster or to go slower.
  • If the motor dies, the carousel is closed.

A Virtual Volume is like a horse

  • It has a trusting relationship with its rider and the rider with it.
  • It can roam free on green pastures and prairies.
  • It can go fast or slow.
  • It can be large or small.
  • It can go faster or slower than another horse.
  • A small horse can carry a small rider.
  • A large horse can carry a large rider.
  • A small horse could go faster than a big horse and vice versa.
  • You can go for a ride with your horse and bring another one just like it. If it gets tired, let it go and jump on the other horse.
  • It can be put out to stud to make more little foals just like it.
  • A horse can do all these things.

You get the point right? Enjoy the picture!

The Storage Unicorn


Atlantis USX Data Services with Hyper-Converged Architecture – Web Scale, Virtual Volumes & In-line Deduplication

Atlantis USX has some very cool technology which I’ve had the pleasure to ‘play’ with over the past few weeks. In this series of posts I’ll attempt to cover the various technologies within the Atlantis USX stack.

The key technologies in the Atlantis USX In-Memory Data Services are:

  1. Inline IO and Data de-duplication
  2. Content aware IO processing
  3. Compression
  4. Fast Clone
  5. Storage Policies
  6. Thin Provisioning

This post focuses on Inline IO and Data de-duplication (or just dedupe for short) and Fast Clone, and how these rich data services enable a hyper-converged solution to outperform enterprise storage arrays.


Why would you use Atlantis USX?

The best way to approach this is to look at some use cases. Crazy as it seems, Atlantis USX delivers All-Flash Array performance while also giving five times the capacity of traditional storage arrays, doing this with 100% software and no hardware appliances: true software defined storage, enabling true web-scale architecture.

The majority of storage vendors today do one or the other, not both. So you could end up with storage silos, where IOPS are provided by an all-flash array and capacity is provided by a traditional SAN.


USX Use Cases

The three key Atlantis USX messages are:

  • Why buy more storage when you can do more with the storage you already have?
    • Get up to 5X the capacity out of your existing storage array
    • Avoid buying any new storage hardware for the next 5 years
    • Reduce storage costs by up to 75%


    Use cases: Storage capacity running out in your current arrays.

    • Don’t buy another disk tray or array, free up capacity by leveraging Atlantis USX Inline Deduplication.
    • Get more capacity out of your all-flash array purchase – all-flash arrays (AFA) provide great performance but not great capacity, get 5X more capacity by using USX on-top of your AFA.



  • Accelerate the performance of your existing storage array
    • Deliver all-flash performance to applications with your existing storage at a fraction of the cost
    • Works with any storage system type – SAN, NAS, Hybrid, DAS


    Use cases: Current storage arrays not providing enough IOPS to your applications – place USX in front of your array and gain all-flash performance by using RAM from your hypervisor to accelerate and optimize the IO.


  • Build hyper-converged systems INSTANTLY without buying any new hardware
    • With RAM, local disk (SSD/SAS/SATA) or VMware VSAN on your existing servers
    • Don’t replace your servers of choice with alternative appliances
    • Use blade servers for hyper-converged infrastructure

    Use cases: Leverage existing investment in your compute estate by using USX to pool and protect local RAM and DAS, creating a hyper-converged solution which can use both the DAS and any shared storage resources already deployed, including traditional SAN/NAS and VMware VSAN. Also use your preferred server architecture for hyper-converged; USX allows you to use both blade and rack server form factors due to the reduction in the number of disks required.


What if I want to do all of the above, all at the same time?

Well, yes you can. And yes Duncan, we are doing this today.

You can get the benefits of rich data services coupled with crazy fast storage and in-line deduplication enabling immediate capacity savings today.


What is Inline IO and Data de-duplication?

In short, it is the ability to dedupe data blocks and therefore IO operations before those blocks and IO operations reach the underlying storage. Atlantis USX reduces the load on the underlying storage by processing IO using the distributed in-memory technology within Atlantis USX.

To demonstrate this, the blue graph below represents IOPS provided by USX to VMs. The red graph represents the actual IOPS that USX then sends down to the underlying storage (if it needs to). [The red graph would be for IO operations that are required for unique writes, however I won’t go into detail about that here in this post.]

Conversely, the same graphs can be used to show data de-duplication, just replace the IOPS metric on the y-axis with Capacity Utilization (GB) and you will also see the same savings in the red graph. Atlantis USX uses in-memory in-line de-duplication to offload IOPS from the underlying storage and to reduce consumed capacity on the underlying storage. I’ll show you how this works in the following labs below.
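The in-line dedupe idea described above can be sketched in a few lines: hash each fixed-size block and only write blocks whose content has not been seen before. This is an illustrative toy, not the USX implementation; the 4KB block size and SHA-256 hashing are assumptions for the example.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed dedupe granularity for this illustration

def write_with_dedupe(data, store):
    """Split data into blocks; only blocks whose content is not already in
    the store are 'written'. Returns the number of physical writes issued,
    showing how duplicate blocks generate zero IO to the underlying storage."""
    physical_writes = 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block
            physical_writes += 1
    return physical_writes

store = {}
# Four logical block writes, but only two distinct contents: half the IO
# (and half the capacity) never reaches the underlying storage.
writes = write_with_dedupe(b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE, store)
print(writes)  # 2 physical writes for 4 logical writes
```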


Examples in the lab

Let’s see some of these use cases in action in the lab.

Lab setup

  • 3 x SuperMicro servers installed with vSphere 5.5 U1b, each with 32GB RAM, 1 x SSD and 1 x SATA drive, plus some shared storage (not in use in this post) presented from an all-flash array (Violin Memory) and a SAN (Nexenta), both over iSCSI.
  • Local direct attached storage (DAS) pooled, protected and managed by Atlantis USX.

Use Case 1: Building hyper-converged using Atlantis USX for VDI

In this use case I’ve created a hyper-converged system using the three servers and pooling the local SSDs as a performance pool and the local SATA drives as a capacity pool.

Memory is not used as a performance pool here because the servers only have 32GB of RAM. In a real-world deployment you can of course use RAM as the performance pool and not require any SSDs at all. I’ll use RAM in another blog post.

In the vSphere Client, these disks are shown as local VMFS5 data stores.

Pooling Local Resources

What USX then does is pool the SSDs into a Performance Pool and the SATA disks into a Capacity Pool.

Performance Pools

Atlantis USX pools the SSDs into a Performance Pool to provide performance. Performance Pools provide redundancy and resiliency to the underlying resources. In this example, where we are only using three servers, the RAW capacity provided by the SSDs is 120GB x 3 = 360GB; however, because the Performance Pool provides redundancy, the actual usable capacity will be 66% of this, so 240GB is usable. This is the minimum configuration for a 3-node vSphere cluster. If you had a 4-node cluster then you would have the option to deploy a Performance Pool with a ‘RAID-10’ configuration. This would then give you 480GB RAW and 240GB usable. It’s really up to you to define how local resources are protected by Atlantis USX, and by adding more nodes to your vSphere cluster and/or more local resources you can create hyper-converged infrastructure which is truly web scale.

Side note 1: an aside on web scale

Atlantis USX can pool, protect and manage multiple vCenter Servers and their resources. vCenter Servers can manage thousands of vSphere ESXi hosts. You can even create a Virtual Volume from resources which span over multiple ESXi servers, which are not in the same vSphere Cluster and not managed by the same vCenter Server. Heck, you can even use USX to provide the rich data services through Virtual Volumes which use multiple vsanDatastores (VMware VSAN). What I’m trying to say is that your USX Virtual Volume is not restricted to a vCenter construct and as such is free to roam as it is in essence decoupled from any underlying hardware. More on Virtual Volumes later.

Roaming Free

Back to Capacity Pools

Atlantis USX pools the SATA disks into a Capacity Pool to provide capacity. Capacity Pools also provide redundancy and resiliency to the underlying resources. In this example, where we are only using three servers, the RAW capacity provided by the SATA disks is 1000GB x 3 = 3000GB; however, because the Capacity Pool provides redundancy, the actual usable capacity will be 66% of this, so 2000GB is usable.
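The pool arithmetic above (and in the Performance Pool example earlier) can be sketched as a small helper. The (n-1)/n and 50% factors simply mirror the worked examples in the text; the actual protection scheme in USX is configurable.

```python
def usable_capacity(raw_per_node_gb, nodes, mirrored=False):
    """Usable GB left after the pool's redundancy overhead.

    A 3-node pool keeps (n-1)/n of the raw capacity usable, while the
    4-node 'RAID-10' style mirrored layout keeps 50%. Sketch only: these
    factors come from the two worked examples, not a full model of USX.
    """
    raw = raw_per_node_gb * nodes
    return raw / 2 if mirrored else raw * (nodes - 1) / nodes

print(usable_capacity(120, 3))                 # SSD pool: 360GB raw -> 240.0GB usable
print(usable_capacity(1000, 3))                # SATA pool: 3000GB raw -> 2000.0GB usable
print(usable_capacity(120, 4, mirrored=True))  # RAID-10 style: 480GB raw -> 240.0GB usable
```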

The resources from the Performance Pool and Capacity Pool are then used to carve out resources to Virtual Volumes.

Side note 2: a quick introduction to Atlantis USX Virtual Volumes

The concept of a Virtual Volume is not new. It was proposed by VMware back in 2012, and covered in more detail by Duncan, but since then has not really had the engineering focus that it deserves until now. The concept is very straightforward – your application should not be dependent on the underlying storage for its storage needs.

“Virtual Volumes is all about making the storage VM-centric – in other words making the VMDK a first class citizen in the storage world” – Cormac Hogan

Your application should be able to define its own set of requirements and then the storage will configure itself to accommodate the application. Some of these requirements could be:

  • The amount of capacity
  • The performance – IOPS and latency
  • The level of availability – backup and replication
  • The isolation level – single virtual volume container just for this application or shared between multiple applications of a similar workload

With Atlantis USX, Virtual Volumes have a storage policy which defines those exact requirements. Atlantis USX will provide the rich data services for the virtual volumes, which can then be consumed by the application at the request of an Application Owner. This enables self-service storage request and management for an application, without waiting for a storage admin to calculate the RAID level and getting your LUN two weeks later. Is this still happening?
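Such a storage policy might be expressed as a JSON document along these lines. The field names and values here are illustrative assumptions, not the actual USX policy schema; they just encode the four requirements listed above.

```python
import json

# Hypothetical storage policy for a Virtual Volume, covering capacity,
# performance, availability and isolation. Field names are assumptions.
policy = {
    "name": "sql-tier1-policy",
    "capacityGB": 500,                                   # amount of capacity
    "performance": {"minIOPS": 20000, "maxLatencyMs": 2},  # IOPS and latency
    "availability": {"backup": True, "replication": "async"},
    "isolation": "dedicated",  # or "shared" for similar workloads
}

print(json.dumps(policy, indent=2))
```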

An Atlantis USX Virtual Volume is created from some memory from the hypervisor, some resource from the Performance Pool and some resource from the Capacity Pool. The Atlantis USX rich data services – inline data deduplication and content aware IO processing – happen at the Virtual Volume level. The Virtual Volume is then exported by Atlantis USX as NFS or iSCSI today (Object and CIFS very soon), either to the underlying hypervisor as a datastore or directly to the application. Think of a Virtual Volume as either a) an application container or b) a datastore – all with the storage policy characteristics explained above, and of course supporting all of the lovely vSphere, Horizon View, vCloud and vCAC features that you’ve come to love and depend on:

  • HA
  • DRS
  • vMotion
  • Fault Tolerance
  • Snapshots
  • Thin Provisioning
  • vSphere Replication
  • Storage Profiles
  • Linked Clones
  • Fast Provisioning
  • VAAI

Back to creating Virtual Volumes from Pools

In our example here, the maximum size for one Virtual Volume would be 240GB from the Performance Pool and 2000GB from the Capacity Pool. However, to take advantage of Atlantis USX in-memory I/O optimization and de-duplication, you would create multiple Virtual Volumes, one per workload type. Doing so will make the most of the Atlantis USX Content Aware IO Processing engine.

Let’s configure a single Virtual Volume for a VDI use case. I’ll create a Virtual Volume with just 100GB from the Capacity Pool and 5GB from the Performance Pool. We will then deploy some Windows 8 VMs into this Virtual Volume and see the Atlantis USX in-memory data deduplication and content aware IO processing in action.

Here’s our Virtual Volume below, configured from 100GB of resilient SATA and just 5GB of resilient SSD. Note that VAAI integration is supported and for NFS the following primitives are currently available: ‘Full File Clone’ and ‘Fast File Clone/Native Snapshot Support’.

[Dear VMware, how about a new ‘Drive Type’ label named ‘In-Memory’, ‘USX’, ‘Crazy Fast’?]

As you can see the datastore is empty. Very empty. The status graphs within USX currently show no IO offload and no deduplication. There’s nothing to dedupe and no IO to process.


Let’s start using this datastore by cloning a Windows 8 template into it. We will immediately see deduplication savings on the full clone after it is copied to our new virtual volume.


Here’s our new template, cloned from the ‘Windows 8.1 Template’ template above which is now located on the new usx-hyb-vol1 virtual volume.


The graph below shows that, for just that single workload, USX has achieved 18% data de-duplication.


Let’s jump into Horizon View and create a desktop pool, using Full Clones for any new desktops. I’ll use the template named win8-template-on-usx as the base template for the new desktop pool, and our new virtual volume usx-hyb-vol1 as the datastore.


Let’s see what happens when we deploy one new virtual machine via a full clone with Horizon View which uses an Atlantis USX Virtual Volume. Hint: The clone happens almost instantly due to the VAAI Full Clone offload to USX. We will also see the deduplication ratio increase and IO offload will also increase.


The Full Clone completes in about 9 seconds. Happy days!


The deduplication has increased to 63%, with just two VMs on this datastore – the template win8-template-on-usx and the first VM usx-vdi1.


Taking a look with the vSphere Client datastore browser again, we now see two VMs in the virtual volume which are both full VMs, not linked clones.


Two Full VMs, only occupying 8.9GB.


Let’s now go ahead and deploy an additional 5 VMs using Horizon View.


All five new VMs are provisioned pretty much instantly as shown in the vSphere Client Recent Tasks pane.


Checking the Atlantis USX status graphs again, the deduplication ratio has increased to 88%.


And we now see 6 Full Clones and the template in the datastore but still just consuming 10.57GB.


Additionally, because the workloads are nearly identical – all six VMs are deployed and running in the usx-hyb-vol1 Virtual Volume with Atlantis USX in-memory Content Aware IO processing and data de-duplication – the IO Offload is pretty much at 100%. This will decrease as users start using the virtual desktops and more unique data is created, but Atlantis USX will always try to serve all IO from the Performance Pool (RAM, Flash or SSD).


No storage blog post is complete without an Iometer test

Let’s run a VDI Iometer profile – 80% writes, 20% reads, 80% random, 4k blocks – using the guide from Jim.


Here’s the result:

55k IOPS (fifty-five thousand IOPS!) and pretty much negligible read and write latency on just three vSphere ESXi hosts. To put that into context, if I deployed one hundred Windows 8 VDI desktops into that Virtual Volume, each desktop (and therefore user) would have roughly 550 IOPS. You can read more about IOPS per user in this post by Brian Madden. To put the number into further context, that Virtual Volume is configured to use just 10GB of RAM from the hypervisor, 5GB of SSD and 100GB of super slow SATA disks (of which only 10.57GB is in use – an 88% capacity saving) in total over the three vSphere ESXi hosts. If you want more IOPS, just create more Virtual Volumes or add more ESXi hosts to scale out the hyper-converged solution.
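The per-desktop figure is simple arithmetic: total measured IOPS divided by the number of desktops sharing the volume.

```shell
# 55,000 measured IOPS shared across 100 VDI desktops:
echo $((55000 / 100))   # prints 550 IOPS per desktop
```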

In other words… crazy performance on a hyper-converged architecture with just a few off-the-shelf disks in a few servers. No unicorns or magic in Atlantis USX, just pure speed and space savings. BOOM!



To summarize, Atlantis USX is a software-defined storage solution that delivers the performance of an All-Flash Array at half the cost of traditional SAN or NAS. You can pool any SAN, NAS or DAS storage and accelerate its performance, while at the same time consolidating storage to increase storage capacity by up to five times. With Atlantis USX, you can avoid purchasing additional storage for more than five years, meet the performance needs of any application without buying hardware, and transition from costly shared storage systems to lower cost hyper-converged systems based on direct-attached storage as I’ve demonstrated here.

In part 2, I’ll use local RAM instead of SSDs, and in part 3, I’ll demonstrate how Atlantis USX can be used to get more capacity and IOPS from your current storage array.

Cannot decrypt password in sysprep after upgrading vCenter Server Appliance

A really quick post.

I’ve recently upgraded from vCSA 5.1 to vCSA 5.5 and found that Horizon View can no longer complete sysprep customization, because the public key changes when you upgrade to a new appliance.

cannot decrypt password

Just edit the customization specification to fix. Hope this helps.

Accelerating SDDC at Atlantis Computing

Sometimes an opportunity comes along that is just too damn exciting to pass up.

This is a short post on my latest move to Atlantis Computing from Canopy Cloud. My new role is primarily with the USX team to help drive the adoption of USX into large Enterprises and Service Providers. USX is Atlantis Computing’s newest technology which does for server workloads what ILIO did for EUC. Quite simply my job is to make USX a success. I’ll be helping the virtualization community understand Atlantis Computing’s USX and ILIO technologies, working with customers and partners and also with our technology partners, such as VMware, NetApp, VCE, Fusion-IO and IBM. Even though USX is new, the technology is based on ILIO which has been shipping since 2009 and is powering the largest VDI deployments in the world.

Today was officially my first day and it was a pretty interesting one. It started with a customer meeting with a large bank in London and then a trip to BriForum, both in a listening capacity, but I couldn’t help myself and ended up talking about ILIO and USX to some techies at the bank and then to people who came to the stand at BriForum. There is definitely keen interest in using RAM to accelerate storage for both EUC and server workloads.

Atlantis Computing’s ILIO and USX technologies are truly software-defined and, in simple terms, enable the in-line optimisation of both IOPS and capacity BEFORE the IO and blocks hit the underlying storage. For example, the blue graph represents IOPS to the storage array for 200 VDI VMs without ILIO; the red graph represents IOPS to the same storage array with ILIO – a saving of 80%.


In addition, because data is deduplicated in-line before being written to disk, there are also massive capacity savings on the underlying storage. No post-process dedupe job over blocks already written to disk is required, hence no overhead on the storage processor or spindles.

In-line de-duplication is not the only capability within the Atlantis Computing technology; some of the others are:



I won’t go into each one in this post; I’ll save that for another day. I’m very excited about my new role at a new company and hope to blog a lot more often as I learn more about Atlantis Computing and, of course, storage virtualization and optimisation in general.

If you want to read more, some of these resources help explain the tech. Oh and we offer a completely free ILIO license for use in POCs/Lab environments, be sure to check it out!

#VMwarePEX parties

Quick post to list all the parties and tweetups that are happening this week.

Day Time Venue Details
Saturday 1830 – late vBeers @ Ri Ra Irish Pub, Mandalay Bay Resort, The Shoppes at Mandalay Bay Place, 3930 Las Vegas Blvd South, Las Vegas, NV



Sunday 2100 – late Community Tweetup @ The Burger Bar
Mandalay Place is located in the mall between Mandalay Bay & Luxor.
3930 Las Vegas Boulevard S. #121A
Las Vegas Nevada. 89119
Not sponsored; organised by @CommsNinja, @hansdeleenheer and @mjbrender



Monday 1700 – 1900 Welcome Reception @ Solutions Exchange Kick off VMware Partner Exchange 2013 at the Welcome Reception. The Welcome Reception is a great opportunity to explore the Solutions Exchange, check out cool products and solutions, and interact with peers, partners and VMware teams. Sponsored by EMC.
Sign up for #VMwareTweetup, taking place 5:30-7:30 in the Hang Space of the Solutions Exchange (same time as the Welcome Reception) to network with peers and to learn about VMware Link, the new social collaboration platform for VMware Partners! Later, you can also join the #PEXTweetup, an “unofficial” offsite sponsored tweetup for the community.
1900 – late Unofficial Tweetup @ Nine Fine Irishmen at New York, New York, 3790 S Las Vegas Blvd – Las Vegas, NV Unofficial Official Community Tweetup sponsored by HP Storage and Veeam.
Tuesday 1630 – 1830 Hall Crawl @ Solutions Exchange Grab a drink and discover new technologies while connecting with new partners and other attendees in the Solutions Exchange!
1730 – 1930 vExpert and VCDX Reception @ Ri Ra Irish Pub, Mandalay Bay Resort vExperts and VCDXes by invitation only.
1900 – 2200 VMware Partner Awards reception & dinner.
Breakers, South Convention Center Level 2.
Invitation only.
Wednesday 1930 – 1030 Partner Appreciation Party Join your colleagues at the Partner Appreciation Lounge in the Mandalay Ballroom! The evening will kick off with the club sounds of DJ Mike Attack and a lounge-style buffet, beer and wine. Then later, Third Eye Blind will take the stage with hits like “Jumper”, “Semi-Charmed Life”, and “Graduate”!

2012 in review

2012 summary of VMwire, not too bad although I did not blog much this year. Will try to do more in 2013. Thanks for visiting.

The stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

About 55,000 tourists visit Liechtenstein every year. This blog was viewed about 250,000 times in 2012. If it were Liechtenstein, it would take about 5 years for that many people to see it. Your blog had more visits than a small country in Europe!

Click here to see the complete report.

VMworld Session Proposal(s) – not a single PowerPoint slide in sight!

Please influence the success of VMworld by spending some time voting for the sessions that you would like to see at San Francisco and Barcelona. Voting is as simple as a left mouse click.

This year I decided to submit three sessions for VMworld based on work that I have done over the past few months.

However, only one is available for public voting; the other two, unfortunately, are deemed top secret and cannot be disclosed until VMworld. Let’s hope they make it, as they are different and focussed on real-life use cases and customer design considerations of product features based on VMware’s upcoming releases. Get your Kool-Aid ready.

Session ID:   2335

Title:   Bring Your Desktop to Your Mobile – Bringing EUC to the User

Abstract:   With EUC becoming more prevalent in organizations that demand agile, mobile and secure client computing, the use of thin clients and all-in-one devices is fast becoming the normal operating model for organizations deploying EUC.

The use of mobile devices such as smartphones to access VMware View desktops could be the option going forward.

Let’s bring EUC to the user by allowing the user to access secure VMware View sessions on their own devices eliminating the need for organizations to manage the thin client devices.

Tracks: End-User Computing

Technical Level: Business Solution.

This session focuses on the possibilities of using Horizon Mobile to allow secure computing from mobile smartphone devices (cell phones). I’ve briefly blogged about it in my previous post to give you a taster. If the session is accepted, I’m hoping to make it stand out by including gadgetry, big screens and the like for a live demonstration, with a little help from some friends. There won’t be any PowerPoint, that’s for sure!

Cloud computing gets a lot more personal

I’ve just bought the biggest smartphone that I could find and have been using it for the past couple of weeks with great results. I’ve had both admiring looks and a few sniggers due to its size. It’s kind of a cross between a tablet and a phone.

I’ve never put it up to my ear however, as I think it’s a bit too much, I use a hands free kit instead. I don’t really want to be seen looking like this now do I?

At the moment I’m really happy with my purchase because it means that not only do I have a new phone, I now have a phone with a big screen and cool functionality. One of the reasons I decided to go for such a hybrid is so that I can read e-books on it without squinting to see the text.

It also means that I do not have to take my iPad around with me when I travel, which means one less device to manage. So how is this related to the blog post title you may ask? Well, I wanted to take this a little further to see if I can use only my mobile phone as my primary computing device. I say primary but this little guy still needs help from his friends in the cloud. So I thought wouldn’t it be cool if I could hook up my phone to an external monitor, connect some peripherals and see what happens…

Well this is the result:

The image above shows my Galaxy Note connected to a 24″ monitor using an HDMI cable for full 1080p resolution. I’ve connected my Apple Bluetooth keyboard and Magic Mouse to it, and also installed the VMware View Client for Android. It’s running a VMware View session using PCoIP over a WIFI connection to my View desktop in one of VMware’s datacentres. How awesome is that?

So why would you want to do this? Well, for one thing it’s pretty cool; the simplicity and usability are amazing and it feels quite natural. Why wouldn’t you use a small personal device such as a mobile phone as a thin client for accessing cloud resources such as a remote desktop hosted on VMware View?

It’s simple yet solves quite a few issues regarding end user access points. We’ve all seen those reports and calculators that justify thin client devices over traditional fat PCs. I’m not an EUC/VDI guy so I just typed “cost of thin client” into Google and went to take a look at the report.

A report by Bloor Research states that moving over to thin client computing could save costs of up to 70%. I’m going to be a little lazy and quote directly from the web page:

*1 Explanation of savings on administration

These were calculated at $1000 per PC. Many research studies indicate that the amount is between $800 and $1,700 per year. Beyond the day-to-day work of installing patches, software upgrades, etc, there is also the 3-year upgrade cycle, which requires an administrator to move all the data and profiles to the new PC. On average this will cost $300 per PC, making for an additional cost of $50 per year (over a 6 year period). Since administration is simplified, an enterprise will require fewer IT staff to perform the same number of functions. This means lower training costs and fewer salaries to pay. Bloor Research estimates that the number of helpdesk staff needed can typically be reduced by 50% and often by 75%.

*2 Explanation of savings on client hardware

These were calculated to be $208 per PC per year. You can get an adequate thin client for $250, in contrast with the average price for a PC of about $750 – this results in a saving of $500. Since PC hardware has to be upgraded approximately every 3 years as opposed to a thin client which only needs to be replaced every 6 years, the savings increase to $1250 over a span of 6 years ($1500 spent on 2 PCs as opposed to $250 on 1 thin client device). This amount is then divided by 6 to calculate a yearly saving. If you are using existing PCs instead of thin clients, the hardware savings can still be applied because you would be extending the life span of the converted computers. Furthermore, the MTBF of a thin client device is higher and it uses far less energy.
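The hardware-saving arithmetic above can be checked in one line: two $750 PCs over the 6-year span versus one $250 thin client.

```shell
# Hardware spend over a 6-year span: two PCs (replaced every 3 years) vs one thin client.
pc_spend=$((750 * 2))             # $1500 on PCs
tc_spend=250                      # $250 on one thin client
saving=$((pc_spend - tc_spend))   # $1250 total saving
echo $((saving / 6))              # prints 208: dollars saved per PC per year
```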

*3 Explanation of extra server hardware costs

These were calculated at $50 per user. Because all processing is done on the server, when using thin clients you will need to buy additional servers to act as terminal servers. On average 30 users will need a dual processor server with 4 gigs of RAM and SCSI hard disks. A brand name server should cost around $4,500 and will depreciate on average in 3 years (in reality you can use them for longer than that).

So that’s a 70% saving according to Bloor Research for just using thin clients over traditional PCs. But hang on, what about further savings? How about ditching the thin client concept altogether and allow users to use their smartphones?

With the popularity of BYOD (bring your own mobile device: expense the monthly costs for calls and line rental) programs, this could be the coup de grâce for thin clients everywhere. Most smartphones nowadays are a lot more powerful than the average thin client, and for the average office application and e-mail worker, a smartphone may be just the right device to use.

Some other benefits I see since using my smartphone to access my View Desktop:

  • It’s my device, I look after it, I clean it, I never spill coffee on it. No one else can touch it. It’s my personal device so I sure as hell am going to take care of it. Do you ever clean your thin client or work computer?
  • I can take it with me when I go to make coffee, or to the printer, or to a meeting. My office and most of my customers’ offices have WIFI everywhere, so my View session does not disconnect. And when I return to my desk, I just plug the HDMI cable back in and everything is still there. No work is lost as everything just resumes.
  • I can take my device anywhere, it’s a smartphone, it’s got my e-mail, calendar, messages, contacts, Twitter and a web browser. I can use it to communicate when I’m out of the office, I can continue working when I’m out and about. And when I return to my desk or home, I can just reconnect it to an external monitor and paired input devices and my session is still there and I can continue where I left off.
  • It’s secure, no-one is going to attempt to log into my session if there’s nothing to log in to! I don’t even have to ‘lock my computer’ anymore, as it’s safely secured in my jacket pocket.
  • Oh and it can still make and receive calls.

Coupled with VMware Horizon Mobile, I think we are onto a sure winner. Click on the image below to watch a short video of what Horizon Mobile is all about.

Let’s just see if this little idea kicks off and makes 2012 the year of VDI… again.

Eye candy below… Comments always welcome, video guide to follow.


Uploading vShield Manager 5.0.1 to vCloud Director as a vApp Template

A quick post on how to enable the import of vShield Manager 5.0.1 OVA as a vApp Template into vCloud Director. This will allow you to spin up vCloud Director labs inside of vCloud Director for some crazy inception action.

Note: this method can also be used for other appliances.

If you downloaded vShield Manager from VMware, you’ll know the file is in OVA format, which is not compatible with vCloud Director.

This post goes through some of the steps required to

  • Convert the OVA to OVF
  • Edit the OVF to remove vCloud Director unsupported features (vmw:ExtraConfig)
  • Create a new manifest file with the new SHA-1 hash

What you will need

  1. VMware OVF Tool available to download here
  2. Notepad++ available to download here
  3. A SHA-1 generator available online here

Converting OVA to OVF

Once you’ve downloaded the VMware-vShield-Manager-5.0.1-638924.ova file, use the VMware OVF Tool to convert it to OVF format.

Open up the command prompt and run the following, assuming that the ova file is saved in C:\Users\Hugo Phan\Downloads\

C:\Program Files\VMware\VMware OVF Tool>ovftool.exe "c:\users\Hugo Phan\Downloads\VMware-vShield-Manager-5.0.1-638924.ova" "C:\Users\Hugo Phan\Downloads\VMware-vShield-Manager-5.0.1-638924.ovf"

The following files will then be extracted within the directory



Editing the OVF file to be compatible with vCloud Director

If you now try to use the current .ovf file to upload vShield Manager into VCD as a vApp Template, you will see the following error:

We need to remove the vmw:ExtraConfig elements from the .ovf file. To do this follow these instructions:

  1. Open the VMware-vShield-Manager-5.0.1-638924.ovf file in Notepad++ or your preferred text editor that does not add carriage returns.
  2. Search for the three vmw:ExtraConfig lines and remove them from the file.

  3. Save your file and exit Notepad++.
  4. Now visit the online SHA-1 generator, upload the VMware-vShield-Manager-5.0.1-638924.ovf file and click on the Calculate Hash button.

  5. When you see the message ‘Your hash has been successfully created’, copy the top lower-case hex hash and open the manifest (.mf) file in Notepad++

  6. Replace the current hash for VMware-vShield-Manager-5.0.1-638924.ovf with the new one.

  7. Save the file.
  8. Now you can successfully upload the new VMware-vShield-Manager-5.0.1-638924.ovf to vCloud Director without the error occurring.
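The manual steps above can also be scripted on a Linux or macOS shell. This is a sketch, assuming sed, sha1sum and awk are available, that the vmw:ExtraConfig elements each sit on their own line, and that the manifest uses the standard OVF `SHA1(file)= hash` line format.

```shell
# Sketch: strip vmw:ExtraConfig lines from an OVF and refresh its manifest hash.
# Usage: fix_ovf <file.ovf> <file.mf>
fix_ovf() {
  # 1) Remove every line containing a vmw:ExtraConfig element (backup kept as .bak).
  sed -i.bak '/vmw:ExtraConfig/d' "$1"
  # 2) Recompute the SHA-1 of the edited OVF.
  hash=$(sha1sum "$1" | awk '{print $1}')
  # 3) Rewrite the matching SHA1(...) line in the manifest.
  sed -i.bak "s|^SHA1($1)= .*|SHA1($1)= $hash|" "$2"
}
```

For the files in this post that would be `fix_ovf VMware-vShield-Manager-5.0.1-638924.ovf VMware-vShield-Manager-5.0.1-638924.mf`, run from the download directory.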

Creating (a better) vSphere 5 ESXi embedded USB Stick (HP)

In a previous post I blogged about creating a vanilla vSphere 5 ESXi USB drive using the .iso file from VMware. This post shows how to create one using the HP version of vSphere ESXi (5.0_Oct_2011_ESXi_HD-USB-SDImgeInstlr_Z7550-00253.iso).

Note: you can use any vendor-customized vSphere ESXi .iso file (VMware, Dell, IBM, etc.).

The HP version comes pre-installed with all the HP CIM providers, which work very well with HP servers, including the HP MicroServer. Using the HP version gives you more detail in the Hardware Status tab.

I’m going to be using a different method, recommended by Will Rodbard (thanks Will), who is a colleague of mine at VMware; you can see his comments on the previous post. In summary, the steps are:

  1. Find and download the following tools:


  2. Run the HPUSBFW tool, click on the USB drive, select ‘FAT32’ and click Format
  3. Run UNETBOOTIN, select Diskimage and browse to the ESXi 5 ISO file
  4. Select the USB drive you have just formatted and click OK
  5. If you want to make more USB keys for more servers, then now is the time to create .IMG files using WinImage; you can then basically clone the image of the USB key to more USB keys. Or, if you don’t wish to use WinImage, just perform steps 1 to 4 again.

Once completed, your USB drive will boot into the ESXi 5 installer. Once booted, install the ESXi 5 Hypervisor to the USB drive (overwriting the installer). This will then leave you with the installed ESXi Hypervisor on the USB.

Note that using this method creates a brand new bootable USB key for use in a new installation of vSphere ESXi. You will have to go through the process of installing ESXi onto the USB key, or another disk or LUN on the target server. If you want a USB key that is already installed with ESXi which saves you from going through the installation wizard, you can use the other method in this post.


I coincidentally left an older USB key in my laptop and booted. Here’s a picture of my Macbook Pro running vSphere ESXi, and it all works by the way, including networking!

Configure NFS Storage on the VMware vCenter Server Appliance

This post highlights some best practices for managing the vCSA log and core files. VMware recommends that these files are stored on an NFS share external to the vCSA, due to the possibility of the default log and core locations filling up.

When this happens, vCenter services will be impacted.

For more information about the vCSA, please see the resources listed here

There may be trouble ahead

This screenshot shows what happens when this is not done: the partition for /storage/core fills up over time and impacts the availability of vCenter Server.

Figure 1 – Local core storage full!

Configuring NFS storage on the vCSA

You can add the NFS shares for the log and core files by logging into the VMware Studio management interface of the vCSA, normally https://<vcsa>:5480.

The default username and password is root | vmware.

Click on the vCenter Server tab, and then click on Storage.

Figure 2 – Configuring NFS storage on the vCSA

Using the correct syntax for the NFS storage

The correct syntax for adding the storage is


So if my NFS_Server is and my NFS_Export is /mnt/vg01/vcsa_core/vcsa_core/, I would enter the following in the box for NFS share for core files:

Make sure that the NFS export on the NFS Server is configured with the no_root_squash option, so that root on the vCSA is not remapped to an anonymous UID/GID. For example, use the command on the NFS server:

exportfs -vo rw,no_root_squash,sync :/mnt/vg01/vcsa_core/vcsa_core/
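To make the export persistent across reboots, the equivalent entry can live in /etc/exports on the NFS server. This is a sketch: the client field shown (`*`, meaning any host) is an assumption, and in practice you should restrict it to the vCSA’s IP address.

```
# /etc/exports on the NFS server (sketch; '*' is an assumption, prefer the vCSA's IP)
/mnt/vg01/vcsa_core/vcsa_core *(rw,no_root_squash,sync)
```

After editing /etc/exports, run `exportfs -ra` to re-read the file.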

Once done, click on Test Settings to verify that the vCSA can successfully store files to the specified NFS shares, then click on Save Settings, then restart the vCSA.

Browsing to the NFS storage

You can see what is created in the NFS share if you list the contents of the core files share.

Figure 3 – Core logs

Similarly, you can see what is created in the log files share by listing its contents. The screenshots below show the directory structure on the NFS server. On the vCSA the directories are mounted at /storage.

Figure 4 – All other Logs

Adding sysprep packages to the VMware vCenter Server Virtual Appliance

The VMware vCenter Server Appliance (vCSA) is a Linux version of vCenter Server. This post discusses the placement of the System Preparation tools (sysprep) packages within the vCSA and how to make the contents of the DEPLOY.CAB file available. Once configured, it is possible to use Guest Operating System Customizations with the vCSA.

My previous posts provide further detail around the features and benefits, feature parity with the Windows vCenter Server, how to quickly deploy the vCSA and how to configure an external Oracle database for larger deployments.

For more information about the vCSA, please see the resources listed here

The sysprep directory on the vCSA is /etc/vmware-vpx/sysprep/.


To get to this location, use an SCP/SFTP client such as WinSCP or FileZilla. The vCSA comes pre-configured with sshd, so no further action needs to be taken here.

Log in as root | vmware

You’ll see the following folder structure within the /etc/vmware-vpx/sysprep/ directory:







Note that Vista, Windows 2008 and Windows 7 are not listed; this is because sysprep is built into those operating systems and vCenter can already leverage it. Guest Operating System Customization with the vCSA is also supported for Linux operating systems out of the box (no configuration of the vCSA is required), although sysprep is obviously not required there; please see the Guest OS Customization Support Matrix for supported Linux distributions.

Follow the vSphere Virtual Machine Administration Guide for instructions on extracting the necessary sysprep files; these files can be found in the DEPLOY.CAB file. If you’re migrating from the Windows vCenter Server to the vCSA, just copy the above directories over.
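Once the files are copied over, a quick loop run on the appliance (via SSH) can report which sysprep directories already contain files. This is a sketch; the directory names checked are the ones used by the vCSA for the OS versions covered here.

```shell
# Sketch: report which vCSA sysprep directories are already populated.
SYSPREP=${SYSPREP:-/etc/vmware-vpx/sysprep}
for d in 2k xp svr2003-64 xp-64; do
  # A directory counts as populated if it exists and contains at least one file.
  if [ -d "$SYSPREP/$d" ] && [ -n "$(ls -A "$SYSPREP/$d" 2>/dev/null)" ]; then
    echo "$d: populated"
  else
    echo "$d: empty or missing"
  fi
done
```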

To obtain the sysprep files, you can use the installation CD/DVDs for each operating system or use the following links to download them (these links are detailed in VMware KB1005593):

Windows Version | vCSA Sysprep Directory | Sysprep Version
Windows 2000 Server SP4 with Update Rollup 1 | /etc/vmware-vpx/sysprep/2k | 5.0.2195.2104
Windows XP Pro SP2 | /etc/vmware-vpx/sysprep/xp | 5.1.2600.2180
Windows 2003 Server SP1 | |
Windows 2003 Server SP2 | |
Windows 2003 Server R2 | |
Windows 2003 x64 | /etc/vmware-vpx/sysprep/svr2003-64 |
Windows XP x64 | /etc/vmware-vpx/sysprep/xp-64 |
Windows XP Pro SP3 | /etc/vmware-vpx/sysprep/xp | 5.1.2600.5512

Note: for Windows 2000, the updated Deployment Tools are available in the Support\Tools\ file on the Windows 2000 SP4 CD-ROM; the file can also be downloaded from the Microsoft website.

Guest Operating System Customization Requirements

Guest operating system customization is supported only if a number of requirements are met.

VMware Tools Requirements

The most current version of VMware Tools must be installed on the virtual machine or template to customize the guest operating system during cloning or deployment.

Virtual Disk Requirements

The guest operating system being customized must be installed on a disk attached as SCSI node 0:0 in the virtual machine configuration.

Windows Requirements

Customization of Windows guest operating systems requires the following conditions:

  • Microsoft Sysprep tools must be installed on the vCenter Server system.
  • The ESXi host that the virtual machine is running on must be version 3.5 or later.

Linux Requirements

Customization of Linux guest operating systems requires that Perl is installed in the Linux guest operating system.

Guest operating system customization is supported on multiple Linux distributions.

Verifying Customization Support for a Guest Operating System

To verify customization support for Windows operating systems or Linux distributions, see the Guest OS Customization Support Matrix.

A look at VMware vCloud Director Organization LDAP Authentication Options

VMware vCloud Director can use three different authentication mechanisms for subscriber authentication to the VCD portal. The portal is accessed using the URL https://<cloud-url>/cloud/org/<organisation>. In this post, I’ll try to highlight some of the authentication options that a subscriber can use to access the VCD portal.

Supported LDAP Services

Platform | LDAP Server | Authentication Methods
Windows Server 2003 | Active Directory | Simple, Simple SSL, Kerberos, Kerberos SSL
Windows Server 2008 | Active Directory | Simple
Windows 7 (2008 R2) | Active Directory | Simple, Simple SSL, Kerberos, Kerberos SSL
Linux | OpenLDAP | Simple, Simple SSL

VCD LDAP Options

A provider can configure a subscriber to use three different authentication mechanisms as highlighted by Figure 1.

Figure 1 – VCD LDAP Options

  1. Do not use LDAP (also known as local authentication)

    This is the simplest authentication method; selecting this radio button when configuring a new Organization means no LDAP service is used. Instead, new users need to be created using the VCD GUI or the VCD API, and these users are stored within the VCD database. Some of the disadvantages of using local authentication are:

  • Groups cannot be used
  • A minimum password length of only 6 characters
  • No password complexity policies
  • No password expiration policies
  • No password history
  • No authentication failure controls
  • No integration with enterprise identity management systems
  2. VCD system LDAP service

    Selecting this will force the Organization to use the same LDAP service that is used by the VCD system (Provider). Although a separate OU can be used for each Organization, this is not the ideal model for large cloud deployments. Some of the disadvantages of using the VCD system LDAP service are:

  • Organizations must use the same LDAP service as the Provider.
  • Although separate OUs can be used, Organizations may not want to have their Users and Groups managed by the Provider.
  • Organizations may not want to share the same LDAP service with another Organization, even if separate OUs are used.
  • No self-service of the LDAP service by each subscriber is possible unless complex access is set up for each subscriber to their respective OU.
  3. Custom LDAP service

    Selecting this will allow the Organization to use its own private LDAP service. This means that each Organization can use a completely separate and unique LDAP service; an Organization does not need to use the same service as the VCD system. This can, for example, be a completely separate and unique Active Directory Forest with no network links to any other AD Forest.

VCD System LDAP Service

Consider this following example:

I run a Public Cloud so I am a Provider of cloud services, my VCD system authenticates to a Microsoft Active Directory Forest with a domain name of HUGO.LOCAL. This allows me as a System Administrator to log into my VCD portal as a user on HUGO.LOCAL.

As the System Administrator, I first configure an LDAP service for the VCD System:

Figure 2 – VCD System LDAP

Then, a new Security Group called SG_VCD.System.Administrators is created in the HUGO.LOCAL domain, with the user HUGO.LOCAL\HPhan as a member of that group.

Figure 3 – VCD System Administrators Group

The new Security Group SG_VCD.System.Administrators is then added to the System Administrator role in VCD.

Figure 4 – Import LDAP group into VCD role

Now I can log into my cloud as a System Administrator with my domain user HUGO\HPhan.

Figure 5 – System LDAP

Organization Custom LDAP Service

Pretty easy and straightforward so far, right? What happens when a subscriber comes along and wants to use my cloud services? Let’s do another example.

A new organization, let’s say Coke, wishes to use its own LDAP service to authenticate with the VCD portal. An Organization LDAP service is configured in much the same way as the System LDAP.

As a System Administrator, I first configure an LDAP service for the Coke Organization. Instead of using the HUGO.LOCAL LDAP service, I direct this Organization’s LDAP service to a unique LDAP service for Coke. This can be an LDAP service hosted by me (the Provider) and managed by Coke (think co-lo), or an LDAP service managed by Coke in Coke’s datacentres (think MPLS/IPVPN):

Figure 6 – Organization LDAP

Then a new Security Group called Organization Administrators is created in the COKE.LOCAL domain, with the user COKE.LOCAL\John.Smith as a member of that group.

Figure 7 – VCD Organization Administrators Group and Members

The new Security Group Organization Administrators is then added to the Organization Administrator role in Coke’s Organization.

Figure 8 – Assign LDAP Group to VCD Role

John Smith can log into the Coke Organization as an Organization Administrator with the domain user COKE\John.Smith.

Figure 9 – LDAP User logged into VCD

So what happens when another Organization joins the party? Extending our example above, let’s say Pepsi also wants to use my cloud services. In much the same way that the Coke Organization is configured to use its own LDAP service, we do the same for the Pepsi Organization: an Organization Administrators group is created in the PEPSI.LOCAL domain with a user named Peter.Smith as a member, and Peter Smith can then log into Pepsi’s Organization as an Organization Administrator.

Figure 10 – Another LDAP User logged into VCD

In Summary

In summary, the Provider uses the System LDAP. Subscriber Organizations could also use the System LDAP (with or without a separate OU) if required; however, each Organization can also be configured to use its own LDAP service.

  • We have a Provider which uses the domain HUGO.LOCAL to authenticate the VCD System; the Active Directory Security Group SG_VCD.System.Administrators holds the System Administrator role in VCD, and my account HUGO\HPhan is a member of this group.
  • We have subscriber 1 with an Organization named Coke Co, which uses its own LDAP service backed by the domain COKE.LOCAL.
  • We have subscriber 2 with an Organization named Pepsi Co, which uses its own LDAP service backed by the domain PEPSI.LOCAL.
  • Provider – Uses HUGO.LOCAL – System LDAP
  • Subscriber 1 – Uses COKE.LOCAL – Custom LDAP
  • Subscriber 2 – Uses PEPSI.LOCAL – Custom LDAP
  • No trust is required between the Provider’s LDAP and any Subscriber’s LDAP.
  • More importantly, there is no trust and no network connectivity between any of the subscribers’ LDAP systems.
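The isolation model above can be pictured as a simple per-tenant lookup. This sketch is illustrative only (the host names are invented) and is in no way vCloud Director code:

```python
# Toy model of per-Organization LDAP settings in a multi-tenant cloud.
# Host names are invented for illustration; the point is that each tenant
# resolves to its own directory service, with no shared trust or fallback.
LDAP_SETTINGS = {
    "System": {"host": "dc01.hugo.local",  "base_dn": "DC=HUGO,DC=LOCAL"},
    "Coke":   {"host": "dc01.coke.local",  "base_dn": "DC=COKE,DC=LOCAL"},
    "Pepsi":  {"host": "dc01.pepsi.local", "base_dn": "DC=PEPSI,DC=LOCAL"},
}

def ldap_for(org: str) -> dict:
    """Return the LDAP settings for one Organization only; an unknown
    tenant raises KeyError rather than falling back to another's LDAP."""
    return LDAP_SETTINGS[org]
```

Because each Organization maps to exactly one directory, authentication for Coke users never touches HUGO.LOCAL or PEPSI.LOCAL.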

Securing Custom LDAP Services

For each Organization, a single LDAP service must be configured as the Custom LDAP to authenticate against. To enable this functionality, the vCloud Director cell must be able to connect to ALL LDAP servers over TCP 389 (LDAP) or 636 (LDAPS). The VMware vCloud Security Hardening Guide gives good recommendations on how Service Providers can host Subscribers’ LDAP servers, and on how to maintain connectivity to Subscribers’ LDAP servers hosted remotely over MPLS/VPN, etc.

It is therefore important that the vCD cell is secured and that network connectivity to each Organization’s LDAP service is also secured. The following extract from the VMware vCloud Security Hardening Guide explains the connectivity options for subscribers’ LDAP services:

Connectivity from the VMware vCloud Director cells to the system LDAP server and any Organization LDAP servers must be enabled for the software to properly authenticate users. As recommended in this document, the system LDAP server must be located on the private management network, separated from the DMZ by a firewall. Some cloud providers and most IT organizations will run any Organization LDAP servers required, and those too would be on a private network, not the DMZ. Another option for an Organization LDAP server is to have it hosted and managed outside of the cloud provider’s environment and under the control of the Organization. In that case, it must be exposed to the VMware vCloud Director cells, potentially through the enterprise datacenter’s own DMZ (see Shared Resource Cloud Service Provider Deployment above).

In all of these circumstances, opening the appropriate ports through the various firewalls in the path between the cells and the LDAP server is required. By default, this port is 389/TCP for LDAP and 636/TCP for LDAPS; however, this port is customizable with most servers and in the LDAP settings in the Web UI. Also, a concern that arises when the Organization is hosting their own LDAP server is exposing it through their DMZ. It is not a service that needs to be accessible to the general public, so steps should be taken to limit access only to the VMware vCloud Director cells. One simple way to do that is to configure the LDAP server and/or the external firewall to only allow access from IP addresses that belong to the VMware vCloud Director cells as reported by the cloud provider. Other options include systems such as per-Organization site-to-site VPNs connecting those two sets of systems, hardened LDAP proxies or virtual directories, or other options, all outside the scope of this document.

Figure 11 – Multiple Custom LDAP in VCD
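When firewalls are locked down to the cells’ addresses as the hardening guide recommends, a quick reachability check from the cell can save troubleshooting time. A minimal sketch (the helper function and the host name are illustrative, not part of vCloud Director):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timed out, and DNS failures
        return False

# Check the standard LDAP and LDAPS ports for an Organization's LDAP server
# (ldap.coke.example is a placeholder; substitute the real server name).
for port in (389, 636):
    state = "open" if can_reach("ldap.coke.example", port) else "unreachable"
    print(f"port {port}: {state}")
```

Run the same check for every Organization LDAP server the cell must reach; an unreachable result points at DNS, routing or firewall rules rather than at vCD configuration.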

Note: Coke and Pepsi are used as an example of multi-tenancy within a public cloud; the names appear on this blog for information purposes only.

Configuring vCenter Server Virtual Appliance to use an Oracle database

In previous posts I blogged about what the vCenter Server Virtual Appliance (vCSA) is, its features and benefits, its feature parity with the Windows vCenter Server, and how to quickly deploy the vCSA. For more information about the vCSA, please see the resources listed here.

This post extends the series with how to configure an external Oracle database for use by the vCSA.

Why use an Oracle database?

The vCSA comes preinstalled with an embedded DB2 database, which has similar use cases to the Windows vCenter Server configured with SQL Express – intended for small deployments of 5 ESX/ESXi servers or fewer. The ability for the vCSA to utilise an external Oracle database allows customers to scale and manage larger vSphere infrastructures, equivalent to environments with Windows vCenter Servers backed by SQL or Oracle databases.

This post shows how quick and easy it is to use an external Oracle database instead of the embedded DB2 database. Hopefully you’ll see how much quicker it is to configure Oracle connectivity between the vCSA and the Oracle server than to install the Oracle 64-bit Client onto a Windows Server, configure tnsnames.ora, and then configure ODBC settings.

Configure an Oracle Database and User

  1. Log into a SQL*Plus session with the system account. I’m using Oracle 11g R2 x64 on Windows Server 2008.
    C:\>sqlplus sys/<password> as SYSDBA
  2. Run the following SQL commands to create a vCenter Server database. Note that your directory structure may be different.
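A representative sketch of this step, based on the vSphere Installation and Setup Guide; the tablespace name VPX, the datafile path and the sizes are illustrative defaults, so adjust them for your environment:

```sql
CREATE SMALLFILE TABLESPACE "VPX"
    DATAFILE 'C:\Oracle\ORADATA\VPX\vpx01.dbf'
    SIZE 1G AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
    LOGGING EXTENT MANAGEMENT LOCAL
    SEGMENT SPACE MANAGEMENT AUTO;
```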


  3. Run the following SQL commands to create a vCenter Server database user with the correct permissions. I will create a new user named “VPXADMIN” with a password of “oracle”.
    CREATE USER "VPXADMIN" IDENTIFIED BY "oracle";
    grant connect to VPXADMIN;
    grant resource to VPXADMIN;
    grant create view to VPXADMIN;
    grant create sequence to VPXADMIN;
    grant create table to VPXADMIN;
    grant create materialized view to VPXADMIN;
    grant execute on dbms_lock to VPXADMIN;
    grant execute on dbms_job to VPXADMIN;
    grant select on dba_tablespaces to VPXADMIN;
    grant select on dba_temp_files to VPXADMIN;
    grant select on dba_data_files to VPXADMIN;
    grant unlimited tablespace to VPXADMIN;

Configure the vCSA

  1. Log into the vCSA VMware Studio management interface at https://<vcsa>:5480/
  2. Navigate to the vCenter Server tab, then click on Database.
  3. Select Oracle as the Database Type using the drop-down menu, enter your environment information into the fields, and then click on Save Settings. Note how easy that was: no messing about with installing the Oracle Client, no need to configure tnsnames.ora, and no ODBC configuration either.

  4. Wait for around 5 minutes for the vCSA to create the database schema.
  5. Now it’s safe to start the vCenter services: navigate to the Status tab and click on Start vCenter.

  6. You can then start using vCenter when the Service Status reports as Running.

Cleaning up the Oracle configuration

After you’ve tested that everything is working, you can revoke the following privileges using SQL*Plus again.

revoke select on dba_tablespaces from VPXADMIN;
revoke select on dba_temp_files from VPXADMIN;
revoke select on dba_data_files from VPXADMIN;

Total configuration time: approximately 10 minutes.


vSphere Installation and Setup Guide

My VCDX Journey – 5 simple steps to VCDX

I’ve just recently been awarded the VCDX4 certification after completing my defence in Frankfurt. The defence is the final stage of the VCDX certification and, for me, the culmination of a year-long journey. Defence experiences have been shared by others such as Duncan Epping, Jason Boche, Scott Lowe and Kenneth van Ditmarsch, and mine was very similar, so this is a post on how I prepared for my VCDX and how, with careful planning, it can be achieved within 12 months.

For information regarding the VCDX certification path, please see the VCDX page on

First a quick thanks to all those that helped in true Oscar style, namely Steve Byrne my manager at VMware for supporting my journey, my colleagues at VMware for your help with the mock panels, you were awesome – @simonlong_, @repping, @ady189, @baecke & John Pollard. A shout out to @frankdenneman for the motivational support and advice.

Fail to plan? Then plan to fail. Preparation is key, so here is how I planned my journey in 5 easy steps.

Step 1 – Gain support from your employer and family

This is critical, as the certification path is not an easy one: there is a minimum of one course to attend (vSphere ICM), three exams (VCP, VCAP-DCA, VCAP-DCD), and fees for the VCDX submission and defence, not to mention the expenses of travelling to the defences themselves. It’s also good to agree time to study and work on your defence materials, as well as any time you need to actually attend the defence. Remember that taking time out to study and prepare means your company takes a hit on your productivity, so a mutual agreement benefits all.

Support from your family is also a must as it will be a huge investment in your time.

Step 2 – Set clear objectives

Sit down with your manager and discuss clear objectives that are SMART. Agree on what your objectives are, and plan to achieve them. An example:

| Objective | Estimated Completion Date | Resources |
|---|---|---|
| VCP | Q1 | ICM course, lab practice |
| VCAP-DCA | Q2 | Courses (optional), lab practice |
| VCAP-DCD | Q3 | Design Workshop (optional), read PDFs, lab practice |
| Create a vSphere Design | Q2-Q3 | Work on a real design for a customer with real-world requirements and use this as your VCDX submission |
| Complete VCDX Submission | Q4 | Choose a VCDX defence date and aim to submit your VCDX materials in time |
Step 3 – Keep a track of your progress

Remember to keep track of your progress. If you pass the exams, share the news with your team; it keeps you motivated. If you fail, then your timeline objectives may need tweaking. Keep your manager in the loop on progress, as ultimately the funding for your fees and expenses needs to come from somewhere, right?

Step 4 – Work on your VCDX materials and then submit

Read the VCDX requirements, register your intention to pursue the VCDX on myLearn, and make sure that you meet all the requirements before sending in your submission. Also get some colleagues to review your documents first.

If everything goes well, your submission may well be accepted by VMware and you’re invited to defend.

Step 5 – Prepare for your defence

At this stage you should have been invited to defend. This is the most critical stage of the process: all the work that you’ve done so far has come down to this. So, no pressure.

There are many ways to prepare, but here’s how I made myself ready for the defence.

1. Request peer reviews from your colleagues and virtualisation friends. Ask them to review all of your documents and materials again, especially the design.

2. Run Webex sessions with your peers to go over your 15-minute VCDX presentation. Record this; it will help you review your performance. Note the duration and your tone of voice: did you project well?

3. Conduct a mock defence session with your peers. Invite them to ask as many questions as they can think of, even the obvious ones. Record this as well, and note your performance: how you responded to the questions and your tone of voice. Set up a BS counter; too much BS means that you don’t know your design well enough and you’ll be at risk when it comes to your real defence. Just remember to be clear, concise and calculated.

4. Practice whiteboarding: you will have at least one whiteboard at your defence and it’s your most powerful tool, so learn to use it like it’s second nature.

5. Know your design inside out, not just the technical aspects. If you can justify the technical design decisions back to the business and technical requirements and constraints then you’re on the right track.

6. If you feel that you’re not ready or you can’t make it to your defence, you can postpone it to the next defence dates without submitting your application again. I was initially scheduled to defend in Singapore but could not travel so defended in Frankfurt instead.

Well, that’s my advice. I hope this information is useful and that it helps more people attain the VCDX certification. Who knows, I might see you on the other side of the table in 12 months’ time. 😀