Workflow for end-to-end tenant provisioning with VMware Cloud Director

Firstly, apologies to all those who asked for the workflow at VMworld 2019 in Barcelona and also e-mailed me for a copy. It’s been hectic in my professional and personal life. I also wanted to clean up the workflows and remove any customer specific items that are not relevant to this workflow. Sorry it took so long!

If you’d like to see an explanation video of the workflows in action, please take a look at the VMworld session recording.

Credits

These vRealize Orchestrator workflows were co-created and developed by Benoit Serratrice and Henri Timmerman.

You can download a copy of the workflow using this link here.

What does it do?

Commission Customer Process

The workflow does the following:

  1. Creates an organization based on your initial organisation name as an input.
  2. Creates a vDC into this organization.
  3. Adds a gateway to the vDC.
  4. Adds an routed network with a gateway CIDR that you enter.
  5. Adds a direct external network.
  6. Converts the organization network to use distributed routing.
  7. Adds a default outbound firewall rule for the routed network.
  8. Adds a source NAT rule to allow the routed network to goto the external network.
  9. Adds a catalog.
Commission Customer vRO Workflow

It also cleans up the provisioning if there is a failure. I have also included a Decommission Customer workflow separately to enable you to quickly delete vCD objects quickly and easily. It is designed for lab environments. Bear this in mind when using it.

Other caveats: the workflows contained in this package are unsupported. I’ll help in the comments below as much as I can.

Getting Started

Import the package after downloading it from github.

The first thing you need to do is setup the global settings in the Global, Commission, storageProfiles and the other configurations. You can find these under Assets > Configurations.

You should then see the Commission Customer v5 workflow under Workflows in your vRO client, it should look something like this.

Enter a customer name and enter the gateway IP in CIDR into the form.

Press Run, then sit back and enjoy the show.

Known Issues

Commissioning a customer when there are no existing edge gateways deployed that use an external network. You see the following error in the vRO logs:

item: 'Commission Customer v5/item12', state: 'failed', business state: 'null', exception: 'TypeError: Cannot read property "ipAddress" from null (Workflow:Commission Customer v5 / get next ip (item8)#5)'

This happens because no IP addresses are in use from the external network pool. The Commission Customer workflow calculates the next IP address to assign to the edge gateway, it cannot do this if the last IP in use is null. Manually provision something that uses one IP address from the external network IP pool. Then use the Commission Customer workflow, it should now work.

Commissioning a customer workflow completes successfully, however you see the following errors:

[2020-03-22 19:30:44.596] [I] orgNetworkId: 545b5ef4-ff89-415b-b8ef-bae3559a1ac7
[2020-03-22 19:30:44.662] [I] =================================================================== Converting Org network to a distributed interface...
[2020-03-22 19:30:44.667] [I] ** API endpoint: vcloud.vmwire.com/api/admin/network/545b5ef4-ff89-415b-b8ef-bae3559a1ac7/action/convertToDistributedInterface
[2020-03-22 19:30:44.678] [I] error caught!
[2020-03-22 19:30:44.679] [I] error details: InternalError: Cannot execute the request:  (Workflow:Convert net to distributed interface / Post to vCD (item4)#21)
[2020-03-22 19:30:44.680] [I] error details: Cannot execute the request:  (Workflow:Convert net to distributed interface / Post to vCD (item4)#21)
[2020-03-22 19:30:44.728] [I] Network converted succesfully.

The workflow attempts to convert the org network from an internal interface to a distributed interface but it does not work even thought the logs says it was successful. Let me know if you are able to fix this.

VMworld 2019 Rewatch: Building a Modern Cloud Hosting Platform on VMware Cloud Foundation with VMware vCloud Director (HBI1321BE)

Rewatch my session with Onni Rautanen at VMworld EMEA 2019 where we cover the clouds that we are building together with Tieto.

Description: In this session, you will get a technical deep dive into Tieto’s next generation service provider cloud hosting platform running on VMware vCloud Director Cloud POD architecture deployed on top of VMware Cloud Foundation. Administrators and cloud engineers will learn from Tieto cloud architects about their scalable design and implementation guidance for building a modern multi-tenant hosting platform for 10,000+ VMs. Other aspects of this session will discuss the API integration of ServiceNow into the VMware cloud stack, Backup and DR, etc.

You’ll need to create a free VMworld account to access this video and many other videos that are made available during and after the VMworld events.

https://videos.vmworld.com/global/2019/videoplayer/29271

Load Balancing and Protecting Cloud Director with Avi Networks

Overview

The Avi Vantage platform is built on software-defined principles, enabling a next generation architecture to deliver the flexibility and simplicity expected by IT and lines of business. The Avi Vantage architecture separates the data and control planes to deliver application services beyond load balancing, such as application analytics, predictive autoscaling, micro-segmentation, and self-service for app owners in both on-premises or cloud environments. The platform provides a centrally managed, dynamic pool of load balancing resources on commodity x86 servers, VMs or containers, to deliver granular services close to individual applications. This allows network services to scale near infinitely without the added complexity of managing hundreds of disparate appliances.

https://avinetworks.com/docs/18.2/architectural-overview/

Avi components

Controllers – these are the management appliances that are responsible for state data, Service Engines are deployed by the controllers. The controllers run in a management network.

Service Engines – the load balancing services run in here. These generally run in a DMZ network. Service Engines can have one or more network adaptors connected to multiple networks. At least one network with routing to the controllers, and the remaining networks as data networks.

Deployment modes

Avi can be installed in a variety of deployment types. For VMware Cloud on AWS, it is not currently possible to deploy using ‘write access’ as vCenter is locked-down in VMC and it also has a different API from vSphere 6.7 vCenter Server. You’ll also find that other tools may not work with vCenter in a VMware Cloud on AWS SDDC, such as govc.

Instead Avi needs to be deployed using ‘No Access’ mode.

You can refer to this link for instructions to deploy Avi Controllers in ‘No Access’ mode.

Since it is only possible to use ‘No Access’ mode with VMC based SDDCs, its also a requirement to deploy the service engines manually. To do this follow the guide in this link, and start at the section titled Downloading Avi Service Engine on OVA.

If you’re using Avi with on-premises deployments of vCenter, then ‘Write Mode’ can be used to automate the provisioning of service engines. Refer to this link for more information on the different modes.

Deploying Avi Controller with govc

You can deploy the Avi Controller onto non VMware Cloud on AWS vCenter servers using the govc tool. Refer to this other post on how to do so. I’ve copied the JSON for the controller.ova for your convenience below.

{
    "DiskProvisioning": "flat",
    "IPAllocationPolicy": "dhcpPolicy",
    "IPProtocol": "IPv4",
    "PropertyMapping": [
        {
            "Key": "avi.mgmt-ip.CONTROLLER",
            "Value": ""
        },
        {
            "Key": "avi.mgmt-mask.CONTROLLER",
            "Value": ""
        },
        {
            "Key": "avi.default-gw.CONTROLLER",
            "Value": ""
        },
        {
            "Key": "avi.sysadmin-public-key.CONTROLLER",
            "Value": ""
        }
    ],
    "NetworkMapping": [
        {
            "Name": "Management",
            "Network": ""
        }
    ],
    "MarkAsTemplate": false,
    "PowerOn": false,
    "InjectOvfEnv": false,
    "WaitForIP": false,
    "Name": null
}

Architecture

For a high-level architecture overview, this link provides a great starting point.

Figure 1. Avi architecture

Service Engine Typical Deployment Architecture

Generally in legacy deployments, where BGP is not used. The service engines would tend to have three network interfaces. These are typically used for frontend, backend and management networks. This is typical of traditional deployments with F5 LTM for example.

For our example here, I will use three networks for the SEs as laid out below.

Network nameGateway CIDRPurpose
sddc-cgw-vcd-dmz1 10.104.125.1/24Management
sddc-cgw-vcd-dmz210.104.126.1/24Backend
sddc-cgw-vcd-dmz310.104.127.1/24Frontend

The service engines are configured with the following details. It is important to make a note of the MAC addresses in ‘No access’ mode as you will need this information later.

Service Engineavi-se1avi-se2
ManagementIP Address 10.104.125.11
Mac Address 00:50:56:8d:c0:2e
IP Address 10.104.125.12
Mac Address 00:50:56:8d:38:33
BackendIP Address 10.104.126.11
Mac Address 00:50:56:8d:8e:41
IP Address 10.104.126.12
Mac Address 00:50:56:8d:53:f6
FrontendIP Address 10.104.127.11
Mac Address 00:50:56:8d:89:b4
IP Address 10.104.127.12
Mac Address 00:50:56:8d:80:41

The Management network is used for communications between the SEs and the Avi controllers. For the port requirements, please refer to this link.

The Backend network is used for communications between the SEs and the application that is being load balanced and protected by Avi.

The Frontend network is used for upstream communications to the clients, in this case the northbound router or firewall towards the Internet.

Sample Application

Lets use VMware Cloud Director as the sample application for configuring Avi. vCD as it is more commonly named (to be renamed VMware Cloud Director), is a cloud platform which is deployed with an Internet facing portal. Due to this, it is always best to protect the portal from malicious attacks by employing a number of methods.

Some of these include, SSL termination and web application filtering. The following two documents explain this in more detail.

vCloud Director Security and VMware vCloud Director Security Hardening Guide.

The vCD application is configured as below:

vCD Appliance 1vCD Appliance 2
namevcd-fe1vcd-fe2
eth0 ip address10.104.123.2110.104.123.22
static route10.104.123.1 10.104.126.0/2410.104.123.1 10.104.126.0/24
eth1 ip address10.104.124.2110.104.124.22

You’ll notice that the eth0 and eth1 interfaces are connected to two different management networks 10.104.123.0/24 and 10.104.124.0/24 respectively. For vCD, it is generally good practice to separate the two interfaces into separate networks.

Network nameGateway CIDRPurpose
sddc-cgw-vcd-mgmt-110.104.123.1/24vCD Frontend
UI/API/VM Remote Console
sddc-cgw-vcd-mgmt-210.104.124.1/24vCD Backend
PostgreSQL, SSH etc.

For simplicity, I also deployed my Avi controllers onto the sddc-cgw-vcd-mgmt-2 network.

The diagram below summarises the above architecture for the HTTP interface for vCD. For this guide, I’ve used VMware Cloud on AWS together with Avi Networks to protect vCD running as an appliance inside the SDDC. This is not a typical deployment model as Cloud Director Service will be able to use VMware Cloud on AWS SDDC resource soon, but I wanted to showcase the possibilities and constraints when using Avi with VMC based SDDCs.

Figure 2 . vCD HTTP Diagram

Configuring Avi for Cloud Director

After you have deployed the Avi Controllers and the Service Engines, there are few more steps needed before vCD is fully up and operational. The proceeding steps can be summarised as follows:

  1. Setup networking for the service engines by assigning the right IP address to the correct MAC addresses for the data networks
  2. Configure the network subnets for the service engines
  3. Configure static routes for the service engines to reach vCD
  4. Setup Legacy HA mode for the service engine group
  5. Setup the SSL certificate for the HTTP service
  6. Setup the Virtual Services for HTTP and Remote Console (VMRC)
  7. Setup the server pools
  8. Setup health monitors
  9. Setup HTTP security policies

Map Service Engine interfaces

Using the Avi Vantage Controller, navigate to Infrastructure > Service Engine, select one of the Service Engines then click on the little pencil icon. Then map the MAC addresses to the correct IP addresses.

Configure the network subnets for the service engines

Navigate to Infrastructure > Networks and create the subnets.

Configure static routes

Navigate to Infrastructure > Routing and setup any static routes. You’ll notice from figure 2 that since the service engine has three network interfaces on different networks, we need to create a static route on the interface that does not have the default gateway. This is so the service engines knows which gateway to use to route traffic for particular traffic types. In this case, the gateway for the service engine to route the HTTP and Remote Console traffic southbound to the vCD cells.

Setup Legacy HA mode for the service engine group

Navigate to Infrastructure > Service Engine Group.

Setup the HA mode to Legacy HA. This is the simplest configuration, you can use Elastic HA if you wish.

Configure the HTTP and Remote Console Virtual Services

Navigate to Applications > Virtual Services.

Creating a Virtual Service, has a few sub tasks which include the creation of the downstream server pools and SSL certificates.

Create a new Virtual Service for the HTTP service, this is for the Cloud Director UI and API. Please use this example to create another Virtual Service for the Remote Console.

For the Remote Console service, you will need to accept TCP 443 on the load balancer but connect southbound to the Cloud Director appliances on port TCP 8443. TCP 8443 is the port that VMRC uses as it shares the same IP addresses as the HTTP service.

You may notice that the screenshot is for an already configured Virtual Service for the vCD HTTP service. The server pool and SSL certificate is already configured. Below are the screenshots for those.

Certificate Management

You may already have a signed HTTP certificate that you wish to use with the load balancer for SSL termination. To do so, you will need to use the JAVA keytool to manipulate the HTTP certificate, obtaining the private key and convert from JCEKS to PCKS12. JAVA keytool is available in the vCD appliance at /opt/vmware/vcloud-director/jre/bin/.

Figure 3. SSL termination on load balancer

For detailed instructions on creating a signed certificate for vCD, please follow this guide.

Convert the keystore file certificates.ks file from JCEKS to PKCS12

keytool -importkeystore -srcstoretype JCEKS -srckeystore certificates.ks -destkeystore certificates_pkcs12.ks -deststoretype PKCS12

Export private key for the HTTP certificate from the certificates_pkcs12.ks file

keytool -importkeystore -srckeystore certificates_pkcs12.ks -srcalias http -destalias http -destkeystore httpcert.p12 -deststoretype PKCS12

Now that you have the private key for the HTTP certificate, you can go ahead and configure the HTTP certificate on the load balancer.

For the certificate file, you can either paste the text or upload the certificate file (.cer, .crt) from the certificate authority for the HTTP certificate.

For the Key (PEM) or PKCS12 file, you can use the httpcert.p12 file that you extracted from the certificates_pkcs12.ks file above.

The Key Passphrase is the password that you used to secure the httpcert.p12 file earlier.

Note that the vCD Remote Console (VMRC) must use pass-through for SSL termination, e.g., termination of the VMRC session must happen on the Cloud Director cell. Therefore, the above certificate management activities on Avi are not required for the VMRC.

Health Monitors

Navigate to Applications > Pools.

Edit the HTTP pool using the pencil icon and click on the Add Active Monitor green button.

Health monitoring of the HTTP service uses

GET /cloud/server_status HTTP/1.0

With an expected server response of

Service is up.

And a response code of 200.

The vCD Remote Console Health monitor is a lot simpler as you can see below.

Layer 7 HTTP Security

Layer 7 HTTP Security is very important and is highly recommended for any application exposed to the Internet. Layer 3 fire-walling and SSL certificates is always never enough in protecting and securing applications.

Navigate to Applications > Virtual Services.

Click on the pencil icon for the HTTP virtual service and then click on the Policies tab. Then click on the HTTP Security policy. Add a new policy with the following settings. You can read more about Layer 7 HTTP policies here.

Allowed StringsRequired by
/tenantTenant use
/loginLogin
/networkAccess to networking
/tenant-networkingAccess to networking
/cloudFor SAML/SSO logins
/transferUploads/Downloads of ISO and templates
/apiGeneral API access
/cloudapiGeneral API access
/docsSwagger API browser
Blocked Strings
/cloudapi/1.0.0/sessions/providerSpecifically block admin APIs from the Internet

This will drop all provider side services when accessed from the Internet. To access provider side services, such as /provider or admin APIs, use an internal connection to the Cloud Director cells.

Change Cloud Director public addresses

If not already done so, you should also change the public address settings in Cloud Director.

Testing the Cloud Director portal

Try to access https://vcloud.vmwire.com/provider

You won’t be able to access it as /provider is not on the list of allowed URI strings that we configured in the L7 HTTPS Security settings.

However, if you try to access https://vcloud.vmwire.com/tenant/vmwire, you will be able to reach the tenant portal for the organisation named VMwire.

Many thanks to Mikael Steding, our Avi Network Systems Engineer for helping me with setting this up.

Please reach out to me if you have any questions.

How to deploy vCloud Director Appliance with Terraform and govc

Recently I’ve been looking at a tool to automate the provisioning of the vCloud Director appliance. I wanted something that could quickly take JSON as input for the OVF properties and be able to consistently deploy the appliance with the same outcome. I tried Terraform, however that didn’t quite work out as I expected as the Terraform provider for vSphere’s vsphere_virtual_machine resource, is not able to deploy OVA or OVFs directly.

Here’s what HashiCorp has to say about that…

NOTE: Neither the vsphere_virtual_machine resource nor the vSphere provider supports importing of OVA or OVF files as this is a workflow that is fundamentally not the domain of Terraform. The supported path for deployment in Terraform is to first import the virtual machine into a template that has not been powered on, and then clone from that template. This can be accomplished with Packergovc‘s import.ovf and import.ova subcommands, or ovftool.

The way that this could be done is to first import the OVA without vApp properties, then convert it to a template, then use Terraform to create a new VM from that template and use the vapp section to customise the appliance.

vapp {
    properties = {
      "guestinfo.tf.internal.id" = "42"
    }

This didn’t work for me as not all vApp properties are implemented in the vsphere_virtual_machine resource yet. Let me know if you are able to get this to work.

So that’s where govc came in handy.

govc is a vSphere CLI built on top of govmomi.

The CLI is designed to be a user friendly CLI alternative to the GUI and well suited for automation tasks. It also acts as a test harness for the govmomi APIs and provides working examples of how to use the APIs.

Once you’ve installed govc, you can then setup the environment by entering the following examples into your shell:

export GOVC_URL="https://vcenter-onprem.vcd.lab"

export GOVC_USERNAME='administrator@vsphere.local'

export GOVC_PASSWORD='My$ecureP4ssw0rd!'

export GOVC_INSECURE=true

To deploy the appliance we will use the govc inport.ova command.

However, before you can do that, you need to obtain the JSON file that contains all the OVF properties for you to edit and then use as an input into the import.ova options with govc.

To create the JSON file run the following command

govc import.spec /path_to_vcd_appliance.ova | python -m json.tool > vcd-appliance.json

govc import.spec /volumes/STORAGE/Terraform/VMware_vCloud_Director-10.0.0.4649-15450333_OVF10.ova | python -m json.tool > vcd-appliance.json

Then edit the vcd-appliance.json file and enter the parameters for your vCD appliance. Then deploy the appliance with the govc import.ova command.

The format for this command is

govc import.ova –options=/path_to_vcd_appliance.json vcd_appliance.ova

govc import.ova -ds=NVMe --options=/Users/phanh/Downloads/terraformdir/govc/vcd-appliance.json /volumes/STORAGE/Terraform/VMware_vCloud_Director-10.0.0.4649-15450333_OVF10.ova

You should now see your vCD appliance being deployed to your vCenter server.

This method also works for any OVA/OVF deployment, including the NSX-T unified appliance, vROPs, vRO.

The next natural step would be to continue the configuration of vCloud Director with the Terraform provider for vCloud Director.

Securing VMware Cloud on AWS remote access to your SDDC with an SSL VPN

The Use Case

What is an SSL VPN?

An SSL VPN (Secure Sockets Layer virtual private network) is a form of VPN that can be used with a standard Web browser. In contrast to the traditional Internet Protocol Security (IPsec) VPN, an SSL VPN does not require the installation of specialised client software on the end user’s computer. -www.bitpipe.com

 

Why?

  • SSL VPN is not an available feature by the Management Gateway or Compute Gateway in VMware Cloud on AWS
  • Enable client VPN connections over SSL to an SDDC in VMware Cloud on AWS for secure access to the resources
  • Avoid site-to-site VPN configurations between on-premises and the Management Gateway
  • Avoid opening vCenter to the Internet

Not all customers want to setup site-to-site VPNs using IPSEC or Route-based VPNs between their on-premises data centre to an SDDC on VMware Cloud on AWS. Using a client VPN such as an SSL VPN to enable a client-side device to setup an SSL VPN tunnel to the SDDC.

Benefits

  • Improve remote administrative security
  • Enable users to access SDDC resource including vCenter over a secure SSL VPN from anywhere with an Internet connection

Summary

This article goes through the requirements and steps needed to get OpenVPN up and running. Of course, you can use any SSL VPN software, OpenVPN is a freely available open source alternative that is quick and easy to setup and is used in this article as a working example.

Review the following basic requirements before proceeding:

  • Access to your VMware Cloud on AWS SDDC
  • Basic knowledge of Linux
  • Basic knowledge of VMware vSphere
  • Basic knowledge of firewall administration

Steps

vCenter Server

In this section you’ll deploy the OpenVPN appliance. The steps can be summarised below:

  • Download the OpenVPN appliance to the SDDC. The latest VMware version is available with this link:

https://openvpn.net/downloads/openvpn-as-latest-vmware.ova

Make a note of the IP address of the appliance, you’ll need this to NAT a public IP to this internal IP using the HTTPS service later. My appliance is using an IP of 192.168.1.201.

  • Log in as root with password of openvpnas to change a password for the openvpn user. This user is used for administering the admin web interface for OpenVPN.

VMware Cloud on AWS

In this section you’ll need to create a number of firewall rules as summarised in the tables further below.

Here’s a quick diagram to show how the components relate.

What does the workflow look like?

  1. A user connects to the SSL VPN to OpenVPN using the public IP address 3.122.197.159.
  2. HTTPS (TCP 443) is NAT’d from 3.122.197.159 to the OpenVPNAppliance with an IP of 192.168.1.201 also to the HTTPS service.
  3. OpenVPN is configured with subnets that VPN users are allowed to access. 192.168.1.0/24 and 10.71.0.0/16 are the two allowed subnets. OpenVPN configures the SSL VPN tunnel to route to these two subnets.
  4. The user can open up a browser session on his laptop and connect to vCenter server using https://10.71.224.4.

Rules Configured on Management Gateway

Rule # Rule name Source Destination Services Action
1 Allow the OpenVPN appliance to access vCenter only on port 443 OpenVPN appliance vCenter HTTPS Allow

The rule should look similar to the following.

Rules Configured on Compute Gateway

Rule # Rule name Source Destination Services Action
2 Allow port 443 access to the OpenVPN appliance Any OpenVPN appliance HTTPS Allow
3 Allow the OpenVPN-network outbound access to any destination OpenVPN-network Any Any Allow

The two rules should look similar to the following.

I won’t go into detail on how to create these rules. However, you will need to create a few User Defined Groups for some of the Source and Destination objects.

NAT Rules

Rule name Public IP Service Public Ports Internal IP Internal Ports
NAT HTTPS Public IP to OpenVPN appliance 3.122.197.159 HTTPS 443 192.168.1.201 443

You’ll need to request a new Public IP before configuring the NAT rule.

The NAT rule should look similar to the following.

OpenVPN Configuration

We need to configure OpenVPN before it will accept SSL VPN connections. Ensure you’ve gone through the initial configuration detailed in this document

https://openvpn.net/vpn-server-resources/deploying-the-access-server-appliance-on-vmware-esxi/

  • Connect to the OpenVPNAppliance VM using a web browser. The URL is for my appliance is https://192.168.1.201:943
  • Login using openvpn and use the password you set earlier.

  • Click on the Admin button

Configure Network Settings

  • Click on Network Settings and enter the public IP that was issued by VMware Cloud on AWS earlier.
  • Also, only enable the TCP daemon.

  • Leave everything else on default settings.
  • Press Save Settings at the bottom.
  • Press the Update Running Server button.

Configure Routing

  • Click on VPN Settings and enter the subnet that vCenter runs on under the Routing section. I use the Infrastructure Subnet. 10.71.0.0/16.

  • Leave all other settings default, however this depends on what you configured when you deployed the OpenVPN appliance initially. My settings are below:

  • Press Save Settings at the bottom.
  • Press the Update Running Server button.

Configure Users and Users’ access to networks

  • Click on User Permissions and add a new user
  • Click on the More Settings pencil icon and configure a password and add in the subnets that you want this user to be able to access. I am using 192.168.1.0/24 – this is the OpenVPN-network subnet and also 10.71.0.0/16 – this is the Infrastructure Subnet for vCenter, ESXi in the SDDC. This will allow clients connected through the SSL VPN to connect directly to vCenter.

If you don’t know the Infrastructure Subnet you can obtain it by going to Network & Security > Overview

  • Press Save Settings at the bottom.
  • Press the Update Running Server button.

Installing the OpenVPN SSL VPN client onto a client device

The desktop client is only required if you do not want to use the web browser to initiate the SSL VPN. Unfortunately, we need signed certificates configured on OpenVPN to use the browser. I don’t have any for this example, so we will use the desktop client to connect instead.

For this section I will use my laptop to connect to the VPN.

  • Open up a HTTPS browser session to the public IP address that was provisioned by VMware Cloud on AWS earlier. For me this is https://3.122.197.159.
  • Accept any certificates to proceed. Of course, you can use real signed certificates with your OpenVPN configuration.
  • Enter the username of the user that was created earlier, the password and select the Connect button.

  • Click on the continue link to download the SSL VPN client

  • Once downloaded, launch the installation file.
  • Once complete you can close the browser as it won’t connect automatically as we are not using signed certificates.

Connecting to the OpenVPN SSL VPN client from a client device

Now that the SSL VPN client is installed we can open an SSL VPN tunnel.

  • Launch the OpenVPNConnect client, I’m on OSX, so SPACEBAR “OpenVPNConnect” will bring up the client.
  • Once launched, you can click on the small icon at the top of your screen.

  • Connect to the public IP relevant to your OpenVPN configuration.
  • Enter the credentials then click on Connect.
  • Accept all certificate prompts and the VPN should now be connected.

Connect to vCenter

Open up a HTTPS browser session and use the internal IP address of vCenter. You may need to add a hosts file entry for the public FQDN for vCenter to redirect to the internal IP instead. That’s it! You’re now accessing vCenter over an SSL VPN.

It’s also possible to use this method to connect to other network segments. Just follow the procedures above to add additional network segments and rules in the Compute Gateway and also add additional subnets to the Access Control section when adding/editing users to OpenVPN.

Call to Action

Learn more with these resources:

Using FaceTime on your Mac for Conference Calls with Webex, GoToMeeting and GlobalMeet

If like me you’re generally plugged into your laptop with a headset when working in a nice comfy place and dislike using your cellphone’s speaker and mic or apple headset for calls but instead prefer to take calls on your laptop using the Calls From iPhone feature.

1

This enables you to easily transition from what you were doing on your laptop – for example, listening to Apple Music, watching YouTube or whatever and flawlessly pick up a call or make a new call directly from your laptop. The benefits here are that you don’t need to take off your headset and continue working without switching devices or changing audio inputs for those with Bluetooth connected headsets.

But have you noticed that the FaceTime interface on OSX has no keypad? This is a problem when you need to pick up a call from the call-back function from Webex for example. Webex asks you to press ‘1’ on the keypad to be connected to the conference. Likewise, if you need to dial into a conference call with Webex, GoToMeeting or Globalmeet, you’ll need to use a keypad to enter the correct input followed generally by ‘#’ to connect. This is a little difficult if there is no keypad right?

If you tried to open up the keypad on your iPhone whilst connected to a call on your Mac, then the audio will transfer from your Mac to your iPhone and you cannot transfer it back.

2.png

Luckily there is a workaround. Well two actually, one will enable you to use the call-back functions from conference call systems and the other will enable you to dial into the meeting room directly.

When you receive a call-back call from Webex for example, and are asked to enter ‘1’ to continue, press the Mute button, then use your keyboard’s keys to provide the necessary inputs – press Mute, press 1, press #, then unmute as necessary.

3.png

The second workaround involves using direct-dial by just typing/pasting the conference number and attendee access codes directly into FaceTime before making the call. A comma ‘,’ sends a pause to the call, enabling you to enter the attendee access code and any other inputs that you need.

4.png

I find that both these work very well for me, mute works for call-back functions and direct-dial works very well when I need to join a call directly. The mute workaround is also very effective when using an IVR phone system too, think banking, customer services systems.

I hope this helps!

Atlantis USX 3.5 – What’s New?

I’m excited to announce the latest enhancements to the Atlantis USX product following the release of Atlantis USX 3.5.

Before we delve too deep in what’s new in USX 3.5, let’s take a brief recap on some of the innovative features from our previous releases.

We delivered USX 2.2 back in February 2015 where we delivered XenServer Support and LDAP authentication, USX 3.0 followed in August 2015 with support for VMware VVOLs, Volume Level Snapshot and Replication and the release of Atlantis Insight. USX 3.1 gave us deduplication aware stretched cluster and also multi-site disaster recovery in October 2015. Two-node clusters were enabled in USX 3.1.2 as well as enhancements to SnapClone for workspace in January 2016.

Some of these features were first in the industry features, for example, support for VMware VVOLs on a hyperconverged platform, all-flash hyperconverged before it became an industry standard and deduplication-aware stretched cluster using the Teleport technology that we pioneered in 2014 and released with USX 2.0.

1

Figure 1. Consistent Innovation

The feature richness and consistent innovation is something that we strive to continue to deliver with USX 3.5 coupled with additional stability and operationally ready feature set.

Let’s focus on the key focus areas with this latest release and what makes it different from the previous versions. Three main areas with the USX 3.5 enhancements are Simplify, Solidify and Optimize. These areas are targeted to provide a better user experience for both administrators and end users.

Simplify

XenServer 7 – USX 3.5 adds support for running USX on XenServer 7, in addition to vSphere 6.2.

Health Checks – We’ve added the ability to perform system health checks at any time, this is of course useful when planning for either a new installation or an upgrade of USX. Of course you can also run a health check on your USX environment at any time to make sure that everything is functioning as it should. This great feature helps identify any configuration issues prior to deployment of volumes. The tool will give pass or fail results for each of the test items, however, not all failed items prevent you from continuing your deployment, these will be flagged as a warning. For example, Internet Accessibility is not a requirement for USX, it is used to upload Insight logs or check for USX updates.

2

Figure 2. Health Checks

Operational Simplicity – enhancing operational simplicity, making things easier to do. On-demand SnapClone has been added to the USX user interface (UI), this enables the ability to create a full SnapClone – essentially a full backup of the contents of an in-memory volume to disk before any maintenance is done on that volume. This helps with maintenance of your environment where you need to quickly take a hypervisor host down for maintenance, the ability to instantly do a SnapClone through the UI makes this an easier method than in previous versions.

3

Figure 3. On-demand and scheduled SnapClones

Simple Maintenance Mode – We’ve also added the ability to perform maintenance mode for Simple Volumes. Simple Volumes can be located on local storage to present the memory from that hypervisor as a high performance in-memory volume for your virtual machine workloads such as VDI desktops. You can now enable maintenance mode using the Atlantis USX Manager UI or the REST API on simple volumes. What this does is that it will migrate the volume from one host to another, enabling you to put the source host into maintenance mode to perform any maintenance operations. This works with both VMware and Citrix hypervisors.

4

Figure 4. Simple Maintenance Mode

Solidify

Alerting is an area that has also been improved. We have added new alerts to highlight utilization of the backing disk that a volume uses. Additionally, alerts to highlight snapshot utilization is also now available. Alerts can be easily accessed using the Alerts menu in the GUI and are designed to be non-invasive however due to their nature, highly visible within the Alerts menu in the USX web UI for quick access.

Disaster Recovery for Simple Hybrid Volumes

Although this is now a new feature in USX 3.5, we’ve actually been deploying this in some of our larger customers for a few years now and the automation and workflows are now being exposed into the USX 3.5 UI. This feature enables simple hybrid volumes to be replicated by underlying storage with replication enabled technology, coupled with the automation and workflows, simple hybrid volumes can be recovered at the DR site with volume objects like the export IP addresses and volume identities being changed to suit the environment at the DR site.

Optimize

Plugin Framework is now a key feature to the USX capabilities. It is an additional framework which is integrated into the USX Web UI. It allows for the importing and running of Atlantis and community created plugins written in Python that enhance the functionality of USX. Plugins such as guest VM operations or guest VM query capabilities. These plugins enable guest-side operations such as restart of all VMs within a USX volume, or query the DNS-name of all guest VMs residing in a USX volume.

5

Figure 5. USX Plugin Framework

I hope you’ll agree that the plugin framework will provide an additional level of capabilities on top of the great capabilities we already have for automation and management such as the USX REST API and USX PowerShell Cmdlets.

Reduced Resource Requirements for volume container memory – we’ve decreased the metadata memory requirements by 40%. In previous versions the amount of memory assigned to metadata was a percentage of the volume export size before data reduction, for example, if you exported a volume of 1TB in size, the amount of memory reserved for metadata would then be 50GB, with USX 3.5 this is now reduced down to just 30GB, whilst still providing the same great performance and data reduction capabilities with fewer memory resources requirements. USX 3.5 optimizations also include the reduction of local flash storage required for the performance tier when using hybrid volumes, we’ve decreased the flash storage requirements by 95%!

In addition to reducing the metadata resource and local flash requirements, we’ve also reduced the amount of storage required for SnapClone space by 50%. This reduction reduces the SnapClone storage footprint on the underlying local or shared storage enabling you to use less storage for running USX.

6

ROBO to support vSphere Essentials.

ROBO use case is now even more cost effective with USX 3.5. This enhancement enables the use of the VMware vSphere Essentials licensing model for customers who prefer the VMware hypervisor over Citrix XenServer. This is a great option for remote and branch offices with three or less servers that wish to enable high performance, data reduction aware storage for remote sites.

Availability

Atlantis USX 3.5 is available now from the Atlantis Portal. Download now and let me know what you think of the new capabilities.

7

Release notes and online documentation are available here.

Deduplication – By the Numbers

Atlantis HyperScale appliances come with effective capacities of 12TB, 24TB and 48TB depending on the model that is deployed. These capacities are what we refer to as effective capacity, i.e., the available capacity after in-line de-duplication that occurs when data is stored onto HyperScale Volumes. HyperScale Volumes always de-duplicate data first before writing data down to the local flash drives. This is what is known as in-line deduplication which is very different from post-de-duplication which will de-duplicate data after it is written down to disk. The latter incurs storage capacity overhead as you will need the capacity to store the data before the post-process de-duplication is able to then de-duplicate. This is why HyperScale appliances only require three SSDs per node to provide the 12TB of effective capacity at 70% de-duplication.

Breaking it down

HyperScale SuperMicro CX-12
Number of nodes 4
Number of SSDs per node 3
SSD capacity 400GB
Usable flash capacity per node 1,200GB
Cluster RAW flash capacity 4,800GB
Cluster failure tolerance 1
Usable flash capacity per cluster 3,600GB
Effective capacity with 70% dedupe 12,000GB

 

Data Reduction Table

De-Dupe Rate (%)

Reduction Rate (X)

95

20.00

90

10.00

80

5.00

75

4.00

70

3.33

65

2.86

60

2.50

55

2.22

50

2.00

45

1.82

40

1.67

35

1.54

30

1.43

25

1.33

20

1.25

15

1.18

10

1.11

5

1.05

 

Formulas

Formula for calculating Reduction Rate

Taking the capacity from a typical HyperScale appliance of 3,600GB, this will give 12,000TB of effective capacity.

Summary

HyperScale provides a guarantee of 12TB per CX-12 appliance, however some workloads such as DEV/TEST private clouds and stateless VDI workloads could see as much as 90% data reduction. That’s 36,000GB of effective capacity. Do the numbers yourself, in-line de-duplication eliminates the need for lots of local flash drives or slower high capacity SAS or SATA drives. HyperScale runs the same codebase as USX and as such utilizes RAM to perform the in-line de-duplication which eliminates the need for add-in hardware cards or SSDs as staging capacity for de-duplication.

For more information please visit this site www.atlantiscomputing.com/hyperscale.

Introducing Atlantis HyperScale

What is it?

A hyper-converged appliance running pre-installed USX software on either XenServer or VMware vSphere and on the hardware of your choice – Lenovo, HP, SuperMicro and Cisco.

How is it installed?

HyperScale comes pre-installed by Atlantis Channel Partners. HyperScale runs exactly the same software as USX, however HyperScale is installed automatically from USB key by the Channel Partner. When it is delivered to your datacenter, it is a simple 5 step process to get the HyperScale appliance ready to use.

Watch the video.

Step 1

Step 2

Step 3

Step 4

Step 5

Done.

What do you get?

The appliance is ready to use in about 30 minutes with three data stores ready for use. You can of course create more volumes and also attach and optimize external storage such as NAS/SAN in addition to the local flash devices that come with the appliance.

Atlantis HyperScale Server Specifications
Server Specifications Per Node CX-12 CX-24 CX-48 (Phase 2)
Server Compute Dual Intel E5-2680 v3
Hypervisor VMware vSphere 5.5 or Citrix XenServer 6.5
Memory 256-512 GB 384-512 GB TBD
Networking 2x 10GbE & 2x 1GbE
Storage
Local Flash Storage 3x 400GB Intel 3710 SSD 3x 800GB Intel 3710 SSD TBD
Total All-Flash Effective Capacity (4 Nodes)* 12 TB 24 TB 48 TB
Summary    
Failure Tolerance 1 node failure (FTT=1)
Number of Deployed Volumes 3
IOPs per Volume More than 50,000 IOPs
Latency per Volume Less than 1ms
Throughput per Volume More than 210 MB/s

Key Differentiators vs other Hyper-converged Offerings

Apart from lower cost (another post to follow) or you can read this post from Chris Mellor from The Register, HyperScale runs on exactly the same codebase as USX. USX has advanced data services that provide very efficient data reduction and IO acceleration patented technology. For a brief overview of the Data Services please see this video.

Pricing

Sizing

Number of nodes = 4

SSDs per node = 3

SSD capacity = 400GB

Usable capacity per node = 1200GB

Usable capacity per appliance with FTT=1 = 3,600GB

Effective capacity with 70% de-duplication = 12,000GB

Summary

USX 2.1 Whats New?

USX 2.1 is now available and has some minor improvements over previous versions. There are some major milestones and some minor improvements that are part of this release.

USX Volume Dashboard
USX Volume Dashboard

Major milestones:

  1. VMware support for USX on the VMware HCL, the VMware KB is in this link.
  2. VMware support for Atlantis NAS VAAI Plugin, the VMware Compatibility Guide for USX is in this link.
    • The Atlantis NAS VAAI Plugin is now officially supported as VMwareAccepted: VIBs with this acceptance level go through verification testing, but the tests do not fully test every function of the software. The partner runs the tests and VMware verifies the result. Today, CIM providers and PSA plugins are among the VIBs published at this level. VMware directs support calls for VIBs with this acceptance level to the partner’s support organization.
    • Atlantis NAS VAAI Plugin can now be installed using VMware Update Manager.

Minor improvements:

  1. Added Incremental Backups for SnapClones for Simple Volumes (VDI use cases).
  2. Added Session Timeout – A new preference was added so that you can configure the number of minutes that a session can be idle before it is terminated.
  3. Added vCenter hierarchy view for Mount and Unmount of Volumes.
  4. Added new Volume Dashboard that shows availability & status reporting improvements, including colour codes for various conditions, all of which roll up to an redesigned volume dashboard that provides an overview of a volume’s configuration, resource use, and health
  5. Improved Status Updates.
  6. Added Active Directory and LDAP Authentication to USX Manager.
  7. Added option to have one node failure for USX clusters of up to 5 nodes. Previously this was up to 4 nodes.
  8. REST API to support changing the USX database server.

Coming to VMworld US? Get yourself a VVOL compliant all software storage array

This article details all of my and Atlantis’ activities at VMworld US. Read more to get an introduction of what we will be doing and announcing and a sneak peek at our upcoming technology roadmap that solves some of the major business issues concerning performance, capacity and availability today. It is indeed going to be a VMworld with ‘no limits’ and one of the great innovations that we will be announcing is Teleport. More on this later!

Teleport your files
No limits with Teleport

I’ll be at in San Francisco from Saturday 23rd August until Thursday 28th August where I’ll be representing the USX team and looking after the Hands on Labs, running live demos and having expert one on ones at the booth. Come and visit to learn more about USX and how I can help you get more performance and capacity out of your VMware and storage infrastructure. I’d love to hear from you.

Where can you find me?

Atlantis is a Gold sponsor this year with Hands on Labs, a booth and multiple speaking sessions. Read on to find out what we’ll be announcing and where you can find my colleagues and me.

Booth in the Exhibitor Hall

I’ll mostly be located at booth 1529, you can find me and my colleagues next to the main VMware booth, just head straight up pass the HP, EMC, NetApp and Dell stands and come speak to me on how USX can help you claim more performance and capacity from these great enterprise storage arrays.

Speak to me about USX data services and I’ll show you some great live demos on how you can reclaim up to 5 times your storage capacity and gain 10 times more performance out of your VMware environment.

Here’s one showing USX as storage for vCloud Director in a Service Provider context and also for Horizon View.

https://www1.gotomeeting.com/register/626574592

If that’s not enough then come and speak to me about some of these great innovations:

  • If you’ve been waiting for a VVOL compliant all software vendor to try VVOLs with vSphere 6 beta then wait no more.
  • VMware VVOL Support – all of your storage past, present and future instantly become VVOL compliant with USX.
  • Teleport – the vMotion of the storage world which gives you the ability to move VMs, VMDKs and files between multiple data centers and the cloud in seconds to improve agility (if you’re thinking its Storage vMotion, trust me it is not).
  • And more….

Location in Solutions Exchange

Sessions

We have three breakout sessions this year, two of them with our customers UHL and Northim Bank where Dave Rose and Erick Stoeckle respectively will take you through how they use USX in production.

The other breakout session is focused on VVols, VASA, VSAN and USX Data Services and will be delivered by our CTO and Founder Chetan Venkakesh (@chetan_). If you have not had the pleasure to hear Chetan speak before, then please don’t miss this opportunity. The guy is insane and uses just one slide with one picture to explain everything to you. He is a great storyteller and you shouldn’t miss it – even if it’s just for the F bombs that he likes to drop.

Chetan will also do a repeat 20-minute condensed session in the Solutions Exchange for a brain dump of Atlantis USX Data Services. Don’t miss this! Chetan will take you through the great new technology in the Atlantis kitbag.

Session Title Speaker(s) When Where
STP3212 – Unleashing the Awesomeness of the SDDC with Atlantis USX Chetan Venkatesh – Founder and CTO, Atlantis Computing Tuesday, Aug 26, 11:20 AM – 11:40 AM Solutions Exchange Theater Booth 1901
INF2951-SPO – Unleashing SDDC Awesomeness with Atlantis USX: Building a Storage Infrastructure for Tier 1 VMs with vVOLS, VASA, VSAN and Atlantis USX Data Services Chetan Venkatesh – Founder and CTO, Atlantis Computing Wednesday, Aug 27, 12:30 PM – 1:30 PM Somewhere in the Moscone (TBC)
EUC2654 – UK Hospital Switches From Citrix XenApp to VMware Horizon Saving £2.5 Million and Improving Patient Care Dave Rose – Head of Design authority, UHL
Seth Knox – VP Products, Atlantis Computing
Wednesday, Aug 27, 1:00 PM – 2:00 PM Somewhere in the Moscone (TBC)
STO2767 – Northrim Bank and USX Erick Stoeckle , Northrim Bank
Nishi Das – Director of Product Management, ILIO USX, Atlantis Computing Inc.
Thursday, Aug 28, 1:30 PM – 2:30 PM Somewhere in the Moscone (TBC)

Hands on Labs

You can find the hands on labs in the Hands on Labs hall, I’ll also be here to support you if you’re taking this lab. The Atlantis USX HOL is titled:

HOL-PRT-1465 – Build a Software-based Storage Infrastructure for Tier 1 VM Workloads with Atlantis USX Data Services.

This HOL consists of three modules, each of which can be taken separately or one after the other.

Modules 1 and 2 are read and click modules where you will follow the instructions in the lab guide and create the USX constructs using the Atlantis USX GUI.

Module 3 however uses the Atlantis USX API browser to quickly perform the steps in Module 1 with some JSON code.

All three modules will take you approximately an hour and a half to complete.

I had an interesting time writing this lab which was a balancing exercise in working with the limited resources assigned to my Org VDC. Please provide feedback on this lab if you can, it’ll help with future versions of this HOL. Just tweet me at @hugophan. Thanks!

Note that performance will be an issue because we are using the VMworld Hands on Labs hosted on Project NEE/OneCloud. This is a vCloud Director cloud in which the ESXi servers that you will see in vCenter are actually all virtual machines. Any VMs that you run on these ESXi servers will themselves be what we call nested VMs. In some cases you could actually see 2 more or nested levels. How’s that for inception? Just be aware that the labs are for a GUI, concept and usability feel and not for performance.

If you want to see performance, come to our booth!

VMware Hands on Labs with 3 layers of nested VMs!

Hands on Labs modules

Module #

01

Module Title

Atlantis USX – Deploying together with VMware VSAN to deliver optimized local storage

Module Narrative

Using Atlantis USX, IT organizations can pool VSANs with existing shared storage, while optimizing it with Atlantis USX In-Memory storage technology to boost performance, reduce storage capacity and provide storage services such as high availability, fast cloning and unified management across all datacenter storage hardware.The student will be taken through how to build a Hybrid virtual volume that optimizes VMware VSAN allowing it to delver high performing virtual workloads from local storage.

  • Build an USX Capacity Pool using the underlying VMware VSAN datastore
  • Build an USX performance pool from local server RAM
  • Build a Hybrid USX virtual volume suitable for running SQL Server
  • Present the Atlantis USX virtual volume to ESX over NFS

Module Objectives
Development Notes

A customer has built a resilient datastore from local storage using VSAN. This is then pooled by Atlantis USX to provide the Deduplication and I/O optimization that server workloads require. A joint whitepaper of this solution has already been written here:http://blog.atlantiscomputing.com/2014/02/atlantis-ilio-usx-and-vmware-vsan-join-forces-on-software-defined-storage/
Estimated module duration: 45 minutes

 

Module #

02

Module Title

Atlantis USX – Build In Memory Storage

Module Narrative

With Atlantis USX In-Memory storage optimization, processing computationally extensive analytics becomes easier and more cost effective allowing for an increased amount of data being processed per node and reduced the time to complete these IO intensive jobs, workloads may include Hadoop, Splunk, MongoDB.During this lab the student will be taken through how to build an Atlantis USX virtual volume using local server memory.

  • Build an USX Performance Pool aggregating server RAM from a number of ESX hosts.
    • Log into the web based management interface, and connect it the vCenter hosting the ESX infrastructure
    • Export the memory from the three ESX hosts onto the network using Atlantis aggregation technology.
    • Combine the discrete RAM resource into a protected performance pool with the Pool creation wizard.
  • Build an In-Memory virtual volume suitable for running a big data application
    • Run through the Create Virtual Volume wizard selecting In-Memory and deploying the In-Memory Virtual Volume
  • Present the Virtual Volume (datastore) to ESX over NFS.
    • Add the newly created datastore into ESX.

Module Objectives
Development Notes

The use case for this lab is increasing application performance by taking advantage of the storage optimization features in Atlantis USX.Estimated module duration: 30 minutes

 

Module #

03

Module Title

Atlantis USX – Using the RESTful API to drive automation and orchestration to scale a software-based storage infrastructure

Module Narrative

Atlantis USX has a powerful set of RESTful APIs. This module will give you insight into those APIs by using them to build out a Virtual Volume. In this module you will:

  • Connect to the USX API browser and review the available APIs
  • Create a Capacity and Memory Pool with the API
  • Create a Virtual Volume with the API

Module Objectives
Development Notes

The intent of this lab is to provide an example of how to use the Atlantis USX RESTful API to deploy USX at scale.Estimated module duration: 15 minutes

Giveaways

Oculus giveaway! See the reality in software defined storage

That’s right! I’ll be giving some of these away at the booth, make sure you stop by to see the new reality in software defined storage!

You can also pick up some of the usual freebies like T-shirts, pens, notepads etc.

There are also Google Glasses, Chromecasts, quad copters and others. We’re also working on something special. Watch this space.

Live Demos at the Booth

Come and speak to me and my colleagues to learn how USX works. We will be running live demos of the following subjects:

  • USX – Storage Consolidation for any workload on any storage.
  • USX – Database Performance Acceleration.
    • Run Tier-1 workloads on commodity hardware.
    • Run Tier-1 high performance workloads on non all-flash or hybrid storage arrays.
  • USX – All Flash Hyper-Converged with SuperMicro/IBM/SANdisk.
  • USX – Teleport (think vMotion for VMs, VMDKs and files over long distances and high latency links). Come talk to me for a live demo.

Beam me up Scotty!

  • USX Tech Preview – Cloud Gateway – using USX data services with AWS S3 as primary storage.
  • USX – VDI on USX on VSAN.
  • VDI – NVIDIA 3D Graphics.

Atlantis Events

SF Giants Game

SF Giants Game, Mon, Aug 25th at 19:00. Please contact your Atlantis Representative or ping me a note if you haven’t received an invite.

USX Partner Training & Breakfast, Wed, Aug 27th at 08:00. Please contact your Atlantis Representative or ping me a note if you’re an Atlantis Partner but have not received an invite.

Let’s meet up!

If you’re at VMworld or in the SF Bay area then let’s meet up and expand our networks.

Date | Hours | Event | Where | Register
Sat, Aug 23rd | 19:00 – 22:00 | VMworld Community Kickoff | Johnny Foley’s, 243 O’Farrell Street | http://twtup.com/6878fiv3e9fjrqz
Sun, Aug 24th | 13:00 – 16:00 | #Opening Acts | City View at Metreon | https://openingacts2014.eventbrite.com/?ref=wplist
Sun, Aug 24th | 15:00 – 17:00 | #v0dgeball Charity Tournament | SOMA Rec Center – Corner of Folsom and 6th Streets | http://tweetvite.com/event/v0dgeball2
Sun, Aug 24th | 16:00 – 19:00 | VMworld Welcome Reception | Solutions Exchange, Moscone Center | n/a
Sun, Aug 24th | 20:00 – 23:00 | #VMunderground | City View at Metreon | https://vmunderground.eventbrite.com/?ref=wplist
Mon, Aug 25th | 19:00 – 23:00 | #vFlipCup VMworld Community TweetUp | Folsom Street Foundry | http://twtvite.com/vflipcup14
Tues, Aug 26th | 16:30 – 18:00 | Hall Crawl | Solutions Exchange, Moscone Center | n/a
Tues, Aug 26th | 19:00 – 22:00 | #VCDX, #vExpert Party | E&O Restaurant & Lounge, 314 Sutter St | Invite only
Tues, Aug 26th | 20:00 – 23:00 | #vBacon | Ferry Building, 1 Sausalito | http://tweetvite.com/event/vBacon2014
Wed, Aug 27th | 17:00 – 19:00 | VMware vCHS Tweetup | 111 Minna |
Wed, Aug 27th | 19:00 – 22:00 | VMworld Party | Moscone Center | n/a

Can’t meet up?

Follow me and my colleagues on Twitter for live updates during VMworld and send us messages and questions – we’d love to hear from you.

Hugo Phan @hugophan

Chetan Venkatesh @chetan_

Seth Knox @seth_knox

Mark Nijmeijer @MarkNijmeijerCA

Gregg Holzrichter @gholzrichter

Toby Colleridge @tobyjcol

Is Atlantis USX the future of Software Defined Storage?

More information on #USX and a great write-up from Storage Swiss.

StorageSwiss.com - The Home of Storage Switzerland

Software Defined Storage (SDS) has certainly caught the attention of IT planners looking to reduce the cost of storage by liberating them from traditional storage hardware lock-in. As SDS evolves the promise of lower storage CAPEX, increased deployment and architecture flexibility, paired with lower OPEX through decreased complexity may emerge from suppliers of this technology. Atlantis USX looks to lead this trend, claiming to deliver all-flash array performance for half the cost of a traditional SAN.

Atlantis USX Architecture

From an architectural perspective, USX has the same roots as Atlantis’s VDI solution, except that it’s focused on virtual server workloads instead of virtual desktops. As part of the enhancements for server virtualization, USX has added the ability to pool any storage resource between servers (SAN, NAS, Flash, RAM, SAS, SATA), it’s added data protection to ensure reliability in case of a host failure and has built its own high availability…

Virtual Volumes – Explained with Carousels, Horses and Unicorns – in pictures

[Tongue in cheek. There’s no World Cup on today so I made this. Please don’t take this too seriously.]

A SAN is like a carousel

  • It provides capacity (just like a carousel) and performance (when the carousel goes around).
  • People ride on static horses bolted to the carousel and try to enjoy the ride.
  • This horse is like a LUN. The horse does not know who is riding it.
  • Everybody travels at the same speed unless you happen to sit on the outside where things go a little bit faster.
  • The speed is relative to how fast the carousel rotates and how quickly you can get to an outside seat (if you want that extra speed and wind through your hair).
  • If you want to guarantee an outside seat, you can get to the front of the queue by having a FastPass+.
  • Get a bigger motor, or increase the speed, the carousel will respond to the required needs.
  • Everybody experiences the same relative performance even though they may want different things – to go faster or to go slower.
  • If the motor dies, the carousel is closed.

A Virtual Volume is like a horse

  • It has a trusting relationship with its rider and the rider with it.
  • It can roam free on green pastures and prairies.
  • It can go fast or slow.
  • It can be large or small.
  • It can go faster or slower than another horse.
  • A small horse can carry a small rider.
  • A large horse can carry a large rider.
  • A small horse could go faster than a big horse and vice versa.
  • You can go for a ride with your horse and bring another one just like it. If it gets tired, let it go and jump on the other horse.
  • It can be put out to stud to make more little foals just like it.
  • A horse can do all these things.

You get the point right? Enjoy the picture!

The Storage Unicorn

Atlantis USX Data Services with Hyper-Converged Architecture – Web Scale, Virtual Volumes & In-line Deduplication

Atlantis USX has some very cool technology which I’ve had the pleasure to ‘play’ with over the past few weeks. In this series of posts I’ll attempt to cover the various technologies within the Atlantis USX stack.

The key technologies in the Atlantis USX In-Memory Data Services are:

  1. Inline IO and Data de-duplication
  2. Content aware IO processing
  3. Compression
  4. Fast Clone
  5. Storage Policies
  6. Thin Provisioning

This post focuses on Inline IO and Data de-duplication (or just dedupe for short) and Fast Clone, and how these rich data services enable a hyper-converged solution to outperform enterprise storage arrays.

 

Why would you use Atlantis USX?

The best way to approach this is to look at some use cases. Crazy as it seems, Atlantis USX delivers all-flash array performance but also gives five times the capacity of traditional storage arrays. It does this with 100% software, no hardware appliances and true software defined storage, enabling true web-scale architecture.

The majority of storage vendors today do either one or the other, not both. So you could end up with storage silos where IOPS are provided by an all-flash array and capacity is provided by a traditional SAN.

 

USX Use Cases

The three key Atlantis USX messages are:

  • Why buy more storage when you can do more with the storage you already have?
    • Get up to 5X the capacity out of your existing storage array
    • Avoid buying any new storage hardware for the next 5 years
    • Reduce storage costs by up to 75%

    Use cases: Storage capacity running out in your current arrays.

    • Don’t buy another disk tray or array; free up capacity by leveraging Atlantis USX Inline Deduplication.
    • Get more capacity out of your all-flash array purchase – all-flash arrays (AFA) provide great performance but not great capacity; get 5X more capacity by using USX on top of your AFA.

  • Accelerate the performance of your existing storage array
    • Deliver all-flash performance to applications with your existing storage at a fraction of the cost
    • Works with any storage system type – SAN, NAS, Hybrid, DAS

    Use cases: Current storage arrays not providing enough IOPS to your applications – place USX in front of your array and gain all-flash performance by using RAM from your hypervisor to accelerate and optimize the IO.

  • Build hyper-converged systems INSTANTLY without buying any new hardware
    • With RAM, local disk (SSD/SAS/SATA) or VMware VSAN on your existing servers
    • Don’t replace your servers of choice with alternative appliances
    • Use blade servers for hyper-converged infrastructure

    Use cases: Leverage existing investment in your compute estate by using USX to pool and protect local RAM and DAS, creating a hyper-converged solution which can leverage both the DAS and any shared storage resources already deployed, including traditional SAN/NAS and VMware VSAN. You can also use your preferred server architecture for hyper-converged; USX allows you to use both blade and rack server form factors due to the reduction in the number of disks required.

 

What if I want to do all of the above, all at the same time?

Well yes you can. And yes Duncan, we are doing this today (http://www.yellow-bricks.com/2014/05/30/looking-back-software-defined-storage/).

You can get the benefits of rich data services coupled with crazy fast storage and in-line deduplication enabling immediate capacity savings today.

 

What is Inline IO and Data de-duplication?

In short, it is the ability to dedupe data blocks and therefore IO operations before those blocks and IO operations reach the underlying storage. Atlantis USX reduces the load on the underlying storage by processing IO using the distributed in-memory technology within Atlantis USX.

To demonstrate this, the blue graph below represents IOPS provided by USX to VMs. The red graph represents the actual IOPS that USX then sends down to the underlying storage (if it needs to). [The red graph represents the IO operations required for unique writes; I won’t go into detail about that in this post.]

Conversely, the same graphs can be used to show data de-duplication, just replace the IOPS metric on the y-axis with Capacity Utilization (GB) and you will also see the same savings in the red graph. Atlantis USX uses in-memory in-line de-duplication to offload IOPS from the underlying storage and to reduce consumed capacity on the underlying storage. I’ll show you how this works in the following labs below.
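
To make the in-line dedupe idea concrete, here is a toy Python sketch of content-addressed inline deduplication. It is not Atlantis code – just an illustration of why identical blocks generate almost no backend IO or capacity use.

import hashlib

class InlineDedupeWriter:
    """Toy illustration of inline dedupe: only unique blocks are written downstream."""
    def __init__(self, backing_store):
        self.backing_store = backing_store   # dict: fingerprint -> block
        self.incoming_writes = 0
        self.backend_writes = 0

    def write(self, block: bytes) -> str:
        self.incoming_writes += 1
        fp = hashlib.sha256(block).hexdigest()   # fingerprint computed in memory
        if fp not in self.backing_store:         # only unique data reaches the array
            self.backing_store[fp] = block
            self.backend_writes += 1
        return fp                                # the logical write maps to a fingerprint

    def offload_ratio(self) -> float:
        return 1 - self.backend_writes / max(self.incoming_writes, 1)

# 100 identical 4k blocks, e.g. full clones of the same template
writer = InlineDedupeWriter({})
for _ in range(100):
    writer.write(b"\x00" * 4096)
print(f"IO offload: {writer.offload_ratio():.0%}")   # 99% – only one block hit the backend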

 

Examples in the lab

Let’s see some of these use cases in action in the lab.

Lab setup

  • 3 x SuperMicro servers installed with vSphere 5.5 U1b, each with 32GB RAM, 1 x SSD and 1 x SATA drive, plus some shared storage (not in use in this post) presented from an all-flash array (Violin Memory) and a SAN (Nexenta), both over iSCSI.
  • Local direct attached storage (DAS) pooled, protected and managed by Atlantis USX.

Use Case 1: Building hyper-converged using Atlantis USX for VDI

In this use case I’ve created a hyper-converged system using the three servers and pooling the local SSDs as a performance pool and the local SATA drives as a capacity pool.

Memory is not used as a performance pool because the servers only have 32GB of RAM. In a real-world deployment you can of course use RAM as the performance pool and not require any SSDs at all. I’ll use RAM in another blog post.

In the vSphere Client, these disks are shown as local VMFS5 data stores.

Pooling Local Resources

What USX then does is pool the SSDs into a Performance Pool and the SATA disks into a Capacity Pool.

Performance Pools

Atlantis USX pools the SSDs into a Performance Pool to provide performance. Performance Pools provide redundancy and resiliency to the underlying resources. In this example, where we are only using three servers, the RAW capacity provided by the SSDs is 120GB x 3 = 360GB; however, because the Performance Pool provides redundancy, the usable capacity is roughly 66% of this, so around 240GB is usable. This is the minimum configuration for a 3-node vSphere cluster. If you had a 4-node cluster then you would have the option to deploy a Performance Pool with a ‘RAID-10’ configuration, which would give you 480GB RAW and 240GB usable. It’s really up to you to define how local resources are protected by Atlantis USX, and by adding more nodes to your vSphere cluster and/or more local resources you can create hyper-converged infrastructure which is truly web scale.
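
As a rough sanity check on those numbers (and on the Capacity Pool numbers further below), here is a small Python sketch. It is my own approximation, not an Atlantis sizing tool: it assumes the default protection behaves like a parity layout that loses one node’s worth of capacity, and that the ‘RAID-10’ option mirrors everything.

def usable_capacity(nodes: int, gb_per_node: float, layout: str) -> float:
    """Approximate usable pool capacity under the two protection layouts discussed."""
    raw = nodes * gb_per_node
    if layout == "parity":    # lose roughly one node's worth of capacity (~66% usable on 3 nodes)
        return raw * (nodes - 1) / nodes
    if layout == "mirror":    # 'RAID-10' style: everything written twice, 50% usable
        return raw / 2
    raise ValueError(f"unknown layout: {layout}")

print(usable_capacity(3, 120, "parity"))    # ~240 GB usable from 360 GB RAW (SSD Performance Pool)
print(usable_capacity(4, 120, "mirror"))    # 240 GB usable from 480 GB RAW (4-node 'RAID-10')
print(usable_capacity(3, 1000, "parity"))   # ~2000 GB usable from 3000 GB RAW (SATA Capacity Pool)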

Side note 1: an aside on web scale

Atlantis USX can pool, protect and manage multiple vCenter Servers and their resources. vCenter Servers can manage thousands of vSphere ESXi hosts. You can even create a Virtual Volume from resources which span over multiple ESXi servers, which are not in the same vSphere Cluster and not managed by the same vCenter Server. Heck, you can even use USX to provide the rich data services through Virtual Volumes which use multiple vsanDatastores (VMware VSAN). What I’m trying to say is that your USX Virtual Volume is not restricted to a vCenter construct and as such is free to roam as it is in essence decoupled from any underlying hardware. More on Virtual Volumes later.

Roaming Free

Back to Capacity Pools

Atlantis USX pools the SATA disks into a Capacity Pool to provide capacity. Capacity Pools also provide redundancy and resiliency to the underlying resources. In this example, where we are only using three servers, the RAW capacity provided by the SATA disks is 1000GB x 3 = 3000GB; however, because the Capacity Pool provides redundancy, the actual usable capacity is around 66% of this, so 2000GB is usable.

The resources from the Performance Pool and Capacity Pool are then used to carve out resources to Virtual Volumes.

Side note 2: a quick introduction to Atlantis USX Virtual Volumes

The concept of a Virtual Volume is not new, it was proposed by VMware back in 2012 (http://blogs.vmware.com/vsphere/2012/10/virtual-volumes-vvols-tech-preview-with-video.html) and in more detail by Duncan here (http://www.yellow-bricks.com/2012/08/07/vmware-vstorage-apis-for-vm-and-application-granular-data-management/) but since then has not really had the engineering focus that it deserves until now (http://www.punchingclouds.com/2014/06/30/virtual-volumes-public-beta/). The concept is very straightforward – your application should not be dependent on the underlying storage for its storage needs.

“Virtual Volumes is all about making the storage VM-centric – in other words making the VMDK a first class citizen in the storage world” – Cormac Hogan

Your application should be able to define its own set of requirements and then the storage will configure itself to accommodate the application. Some of these requirements could be:

  • The amount of capacity
  • The performance – IOPS and latency
  • The level of availability – backup and replication
  • The isolation level – single virtual volume container just for this application or shared between multiple applications of a similar workload

With Atlantis USX, Virtual Volumes have a storage policy which defines those exact requirements. Atlantis USX provides the rich data services for the virtual volumes, which can then be consumed by the application at the request of an Application Owner – enabling self-service storage request and management for an application without waiting for a storage admin to calculate the RAID level and deliver your LUN two weeks later. Is this still happening?
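
As an illustration only, a storage policy expressing those requirements might look something like the Python dictionary below. The field names are hypothetical, not the actual Atlantis USX policy schema.

# Hypothetical storage policy for a Virtual Volume – the field names are
# illustrative, not the real Atlantis USX schema.
vdi_policy = {
    "name": "vdi-gold",
    "capacity_gb": 100,          # how much space the application gets
    "performance": {
        "min_iops": 5000,        # performance floor for the workload
        "max_latency_ms": 2,
    },
    "availability": {
        "backup": "daily",       # level of availability: backup and replication
        "replication": "async",
    },
    "isolation": "dedicated",    # dedicated container vs shared with similar workloads
    "export": "nfs",             # how the volume is presented (NFS or iSCSI today)
}
print(vdi_policy["name"], vdi_policy["capacity_gb"], "GB")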

An Atlantis USX Virtual Volume is created from some memory from the hypervisor, some resource from the Performance Pool and some resource from the Capacity Pool. The Atlantis USX rich data services – inline data deduplication and content aware IO processing – happen at the Virtual Volume level. The Virtual Volume is then exported by Atlantis USX as NFS or iSCSI (today; Object and CIFS very soon), either to the underlying hypervisor as a datastore or directly to the application. Think of a Virtual Volume as either a) an application container or b) a datastore – all with the storage policy characteristics explained above and of course supporting all of the lovely vSphere, Horizon View, vCloud and vCAC features that you’ve come to love and depend on:

  • HA
  • DRS
  • vMotion
  • Fault Tolerance
  • Snapshots
  • Thin Provisioning
  • vSphere Replication
  • Storage Profiles
  • Linked Clones
  • Fast Provisioning
  • VAAI

Back to creating Virtual Volumes from Pools

In our example here, the maximum size for one Virtual Volume would be constructed from 240GB from the Performance Pool and 2000GB from the Capacity Pool. However, to take advantage of Atlantis USX in-memory I/O optimization and de-duplication, you would create multiple Virtual Volumes, one per workload type. Doing so will make the most out of the Atlantis USX Content Aware IO Processing engine.

Let’s configure a single Virtual Volume for a VDI use case. I’ll create a Virtual Volume with just 100GB from the Capacity Pool and 5GB from the Performance Pool. We will then deploy some Windows 8 VMs into this Virtual Volume and see the Atlantis USX in-memory data deduplication and content aware IO processing in action.

Here’s our Virtual Volume below, configured from 100GB of resilient SATA and just 5GB of resilient SSD. Note that VAAI integration is supported and for NFS the following primitives are currently available: ‘Full File Clone’ and ‘Fast File Clone/Native Snapshot Support’.

[Dear VMware, how about a new ‘Drive Type’ label named ‘In-Memory’, ‘USX’, ‘Crazy Fast’?]

As you can see the datastore is empty. Very empty. The status graphs within USX currently show no IO offload and no deduplication. There’s nothing to dedupe and no IO to process.

 

Let’s start using this datastore by cloning a Windows 8 template into it. We will immediately see deduplication savings on the full clone after it is copied to our new virtual volume.

 

Here’s our new template, cloned from the ‘Windows 8.1 Template’ template above which is now located on the new usx-hyb-vol1 virtual volume.

 

The same graph below shows that for just that single workload, USX has already achieved 18% data de-duplication.

 

Let’s jump into Horizon View and create a desktop pool that uses Full Clones for new desktops. I’ll use the template named win8-template-on-usx as the base template for the new desktop pool and our new virtual volume usx-hyb-vol1 as the datastore.

 

Let’s see what happens when we deploy one new virtual machine via a full clone with Horizon View onto the Atlantis USX Virtual Volume. Hint: the clone happens almost instantly due to the VAAI Full Clone offload to USX. We will also see the deduplication ratio and the IO offload increase.

 

The Full Clone completes in about 9 seconds. Happy days!

 

The deduplication has increased to 63%! With just two VMs on this datastore – the template win8-template-on-usx and the first VM usx-vdi1.

 

Taking a look with the vSphere Client datastore browser again, we now see two VMs in the virtual volume which are both full VMs, not linked clones.

 

Two Full VMs, only occupying 8.9GB.

 

Let’s now go ahead and deploy an additional 5 VMs using Horizon View.

 

All five new VMs are provisioned pretty much instantly as shown in the vSphere Client Recent Tasks pane.

 

Checking the Atlantis USX status graphs again, the deduplication ratio has increased to 88%.

 

And we now see 6 Full Clones and the template in the datastore but still just consuming 10.57GB.

 

Additionally, because the workloads are pretty much identical – all six VMs are deployed and running in the usx-hyb-vol1 Virtual Volume – and with Atlantis USX in-memory Content Aware IO processing and inline IO and data de-duplication, the IO offload is pretty much at 100%. This will decrease as users start using the virtual desktops and create more unique data, but Atlantis USX will always try to serve all IO from the Performance Pool (RAM, Flash or SSD).

 

No storage blog post is complete without an Iometer test

Let’s do a VDI Iometer profile with 80% writes, 20% reads at 80% random with 4k blocks using the guide from Jim (http://www.jimmoyle.com/2013/08/how-to-use-iometer-to-simulate-a-desktop-workload/).

 

Here’s the result:

55k IOPS (fifty-five thousand IOPS!) and pretty much negligible read and write latency on just three vSphere ESXi hosts. To put that into context, if I deployed one hundred Windows 8 VDI desktops into that Virtual Volume, each desktop (and therefore user) would basically have 550 IOPS. You can read more about IOPS per user in this post by Brian Madden (http://searchvirtualstorage.techtarget.com/video/Brian-Madden-discusses-VDI-IOPS-SSD-storageless-VDI). To put this IOPS number into further context, that Virtual Volume is configured to use just 10GB of RAM from the hypervisor, 5GB of SSD and 100GB (of which only 10.57GB is in use, an 88% capacity saving) of super slow SATA disks in total over the three vSphere ESXi hosts. If you want more IOPS, you just need to create more Virtual Volumes or add more ESXi hosts to scale out the hyper-converged solution.
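
Restating the per-desktop arithmetic above as a quick sanity check (the only inputs are the measured 55k IOPS, the hypothetical 100-desktop deployment and the resources assigned to the Virtual Volume):

# Back-of-the-envelope check of the figures quoted above.
total_iops = 55_000            # measured with Iometer across the three hosts
desktops = 100                 # hypothetical Windows 8 full-clone deployment
print(total_iops / desktops)   # 550.0 IOPS available per desktop/user

ram_gb, ssd_gb, sata_gb = 10, 5, 100     # resources assigned to the Virtual Volume
print(ram_gb * 1024 / desktops)          # ~102 MB of hypervisor RAM per desktop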

In other words… crazy performance on a hyper-converged architecture with just a few off-the-shelf disks on a few servers. No unicorns or magic in Atlantis USX, just pure speed and space savings. BOOM!

 

Summary

To summarize, Atlantis USX is a software-defined storage solution that delivers the performance of an All-Flash Array at half the cost of traditional SAN or NAS. You can pool any SAN, NAS or DAS storage and accelerate its performance, while at the same time consolidating storage to increase storage capacity by up to five times. With Atlantis USX, you can avoid purchasing additional storage for more than five years, meet the performance needs of any application without buying hardware, and transition from costly shared storage systems to lower cost hyper-converged systems based on direct-attached storage as I’ve demonstrated here.

In part 2 I’ll use local RAM instead of SSDs, and in part 3 I’ll demonstrate how Atlantis USX can be used to get more capacity and IOPS from your current storage array.

Cannot decrypt password in sysprep after upgrading vCenter Server Appliance

A really quick post.

I’ve recently upgraded from vCSA 5.1 to vCSA 5.5 and found that Horizon View can no longer complete sysprep customization because the public key changes when you upgrade to a new appliance.

cannot decrypt password

Just edit the customization specification and re-enter the password so that it is re-encrypted with the new key. Hope this helps.
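
If you have many customization specifications to fix, the same edit can be scripted. Below is a hedged pyVmomi sketch – the vCenter address, credentials, spec name and password are placeholders, and re-saving the spec through the GUI as described above is all that is actually required.

# Sketch: re-save a guest customization spec so the administrator password is
# re-encrypted with the new vCenter certificate. All values below are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()     # lab only: accept self-signed certs
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    mgr = si.RetrieveContent().customizationSpecManager
    item = mgr.GetCustomizationSpec(name="Win8-Sysprep")      # spec used by Horizon View
    sysprep = item.spec.identity                              # vim.vm.customization.Sysprep
    sysprep.guiUnattended.password = vim.vm.customization.Password(
        plainText=True, value="NewLocalAdminPassword")        # re-enter in plain text
    mgr.OverwriteCustomizationSpec(item)                      # vCenter re-encrypts on save
finally:
    Disconnect(si)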

Accelerating SDDC at Atlantis Computing

Sometimes an opportunity comes along that is just too damn exciting to pass.

This is a short post on my latest move to Atlantis Computing from Canopy Cloud. My new role is primarily with the USX team to help drive the adoption of USX into large Enterprises and Service Providers. USX is Atlantis Computing’s newest technology which does for server workloads what ILIO did for EUC. Quite simply my job is to make USX a success. I’ll be helping the virtualization community understand Atlantis Computing’s USX and ILIO technologies, working with customers and partners and also with our technology partners, such as VMware, NetApp, VCE, Fusion-IO and IBM. Even though USX is new, the technology is based on ILIO which has been shipping since 2009 and is powering the largest VDI deployments in the world.

Today was officially my first day and it was a pretty interesting one. It started with a customer meeting with a large bank in London and then BriForum, both in a listening capacity, but I couldn’t help myself and ended up talking about both ILIO and USX to some techies at the bank and then to some people who came to the stand at BriForum. There is definitely hot interest in using RAM to accelerate storage for both EUC and server workloads.

Atlantis Computing’s ILIO and USX technologies are truly software defined and, in simple terms, enable the in-line optimisation of both IOPS and capacity BEFORE the IOPS and blocks hit the underlying storage. For example, the blue graph represents IOPS to the storage array for 200 VDI VMs without ILIO, while the red graph represents IOPS to the same storage array with ILIO – a saving of 80%.

ILIO

In addition, because data is deduplicated in-line, there are also massive capacity savings on the underlying storage. Dedupe occurs before blocks are written to disk, so there is no need for a post-process dedupe job on data already written, and hence no overhead on the storage processor or spindles.

In-line de-duplication is not the only capability within the Atlantis Computing technology; some of the others are:

features

 

I won’t go into each one in this post; I’ll save that for another day. I’m very excited about my new role at a new company and hope to blog a lot more often as I learn more about Atlantis Computing and of course storage virtualization and optimisation in general.

If you want to read more, some of these resources help explain the tech. Oh, and we offer a completely free ILIO license for use in POCs/lab environments – be sure to check it out!

#VMwarePEX parties

Quick post to list all the parties and tweetups that are happening this week.


Day | Time | Venue | Details
Saturday | 1830 – late | vBeers @ Ri Ra Irish Pub, Mandalay Bay Resort, The Shoppes at Mandalay Bay Place, 3930 Las Vegas Blvd South, Las Vegas, NV | http://www.vbeers.org/2013/02/20/vbeers-las-vegas-nv-saturday-23-february-2013/ BYOWallet.
Sunday | 2100 – late | Community Tweetup @ The Burger Bar, Mandalay Place (in the mall between Mandalay Bay & Luxor), 3930 Las Vegas Boulevard S. #121A, Las Vegas, Nevada 89119 | http://tweetvite.com/event/GeeksWithoutBorders Not sponsored; organised by @CommsNinja, @hansdeleenheer and @mjbrender. BYOWallet.
Monday | 1700 – 1900 | Welcome Reception @ Solutions Exchange | Kick off VMware Partner Exchange 2013 at the Welcome Reception – a great opportunity to explore the Solutions Exchange, check out cool products and solutions, and interact with peers, partners and VMware teams. Sponsored by EMC. Sign up for the #VMwareTweetup, taking place 17:30 – 19:30 in the Hang Space of the Solutions Exchange (same time as the Welcome Reception), to network with peers and to learn about VMware Link, the new social collaboration platform for VMware Partners. Later, you can also join the #PEXTweetup, an “unofficial” offsite sponsored tweetup for the community.
Monday | 1900 – late | Unofficial Tweetup @ Nine Fine Irishmen at New York, New York, 3790 S Las Vegas Blvd, Las Vegas, NV | Unofficial Official Community Tweetup sponsored by HP Storage and Veeam. http://twtvite.com/CommunityAtPEX
Tuesday | 1630 – 1830 | Hall Crawl @ Solutions Exchange | Grab a drink and discover new technologies while connecting with new partners and other attendees in the Solutions Exchange!
Tuesday | 1730 – 1930 | vExpert and VCDX Reception @ Ri Ra Irish Pub, Mandalay Bay Resort | vExperts and VCDXes by invitation only.
Tuesday | 1900 – 2200 | VMware Partner Awards reception & dinner @ Breakers, South Convention Center Level 2 | Invitation only.
Wednesday | 1930 – 1030 | Partner Appreciation Party @ Mandalay Ballroom | Join your colleagues at the Partner Appreciation Lounge in the Mandalay Ballroom! The evening will kick off with the club sounds of DJ Mike Attack and a lounge-style buffet, beer and wine. Then later, Third Eye Blind will take the stage with hits like “Jumper”, “Semi-Charmed Life”, and “Graduate”!

2012 in review

2012 summary of VMwire, not too bad although I did not blog much this year. Will try to do more in 2013. Thanks for visiting.

The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

About 55,000 tourists visit Liechtenstein every year. This blog was viewed about 250,000 times in 2012. If it were Liechtenstein, it would take about 5 years for that many people to see it. Your blog had more visits than a small country in Europe!

Click here to see the complete report.