archives

sddc

This tag is associated with 3 posts

Securing VMware Cloud on AWS remote access to your SDDC with an SSL VPN

The Use Case

What is an SSL VPN?

An SSL VPN (Secure Sockets Layer virtual private network) is a form of VPN that can be used with a standard Web browser. In contrast to the traditional Internet Protocol Security (IPsec) VPN, an SSL VPN does not require the installation of specialised client software on the end user’s computer. -www.bitpipe.com

 

Why?

  • SSL VPN is not an available feature of the Management Gateway or Compute Gateway in VMware Cloud on AWS
  • Enable client VPN connections over SSL to an SDDC in VMware Cloud on AWS for secure access to the resources
  • Avoid site-to-site VPN configurations between on-premises and the Management Gateway
  • Avoid opening vCenter to the Internet

Not all customers want to set up site-to-site VPNs using IPsec or route-based VPNs between their on-premises data centre and an SDDC on VMware Cloud on AWS. A client VPN such as an SSL VPN instead lets a client-side device set up an SSL VPN tunnel directly to the SDDC.

Benefits

  • Improve remote administrative security
  • Enable users to access SDDC resources, including vCenter, over a secure SSL VPN from anywhere with an Internet connection

Summary

This article goes through the requirements and steps needed to get OpenVPN up and running. You can use any SSL VPN software; OpenVPN is a freely available open-source alternative that is quick and easy to set up, and it is used in this article as a working example.

Review the following basic requirements before proceeding:

  • Access to your VMware Cloud on AWS SDDC
  • Basic knowledge of Linux
  • Basic knowledge of VMware vSphere
  • Basic knowledge of firewall administration

Steps

vCenter Server

In this section you’ll deploy the OpenVPN appliance. The steps are summarised below:

  • Download the OpenVPN appliance to the SDDC. The latest VMware version is available with this link:

https://openvpn.net/downloads/openvpn-as-latest-vmware.ova

Make a note of the IP address of the appliance; you’ll need it later to NAT a public IP to this internal IP using the HTTPS service. My appliance is using an IP of 192.168.1.201.

  • Log in as root with the password openvpnas and change the password for the openvpn user. This user is used to administer the OpenVPN admin web interface.

VMware Cloud on AWS

In this section you’ll need to create a number of firewall rules as summarised in the tables further below.

Here’s a quick diagram to show how the components relate.

What does the workflow look like?

  1. A user connects to OpenVPN over the SSL VPN using the public IP address 3.122.197.159.
  2. HTTPS (TCP 443) on 3.122.197.159 is NAT’d to the HTTPS service on the OpenVPN appliance at 192.168.1.201.
  3. OpenVPN is configured with the subnets that VPN users are allowed to access: 192.168.1.0/24 and 10.71.0.0/16. OpenVPN configures the SSL VPN tunnel to route to these two subnets.
  4. The user can then open a browser session on their laptop and connect to vCenter Server using https://10.71.224.4.
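The routing behaviour in step 3 can be sketched with Python’s ipaddress module. This is a minimal illustration using the example subnets, not anything OpenVPN actually runs:

```python
import ipaddress

# Subnets that OpenVPN pushes to the client; traffic to these is
# routed over the SSL VPN tunnel, everything else goes out as normal.
VPN_ROUTED_SUBNETS = [
    ipaddress.ip_network("192.168.1.0/24"),  # OpenVPN-network subnet
    ipaddress.ip_network("10.71.0.0/16"),    # Infrastructure Subnet (vCenter, ESXi)
]

def routed_over_vpn(destination: str) -> bool:
    """Return True if traffic to this destination uses the SSL VPN tunnel."""
    addr = ipaddress.ip_address(destination)
    return any(addr in net for net in VPN_ROUTED_SUBNETS)

print(routed_over_vpn("10.71.224.4"))  # vCenter -> True
print(routed_over_vpn("8.8.8.8"))      # general Internet -> False
```

This is exactly the check the client’s routing table performs once the tunnel is up: only destinations inside the two allowed subnets traverse the VPN.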

Rules Configured on Management Gateway

Rule # | Rule name | Source | Destination | Services | Action
1 | Allow the OpenVPN appliance to access vCenter only on port 443 | OpenVPN appliance | vCenter | HTTPS | Allow

The rule should look similar to the following.

Rules Configured on Compute Gateway

Rule # | Rule name | Source | Destination | Services | Action
2 | Allow port 443 access to the OpenVPN appliance | Any | OpenVPN appliance | HTTPS | Allow
3 | Allow the OpenVPN-network outbound access to any destination | OpenVPN-network | Any | Any | Allow

The two rules should look similar to the following.

I won’t go into detail on how to create these rules. However, you will need to create a few User Defined Groups for some of the Source and Destination objects.

NAT Rules

Rule name | Public IP | Service | Public Ports | Internal IP | Internal Ports
NAT HTTPS Public IP to OpenVPN appliance | 3.122.197.159 | HTTPS | 443 | 192.168.1.201 | 443

You’ll need to request a new Public IP before configuring the NAT rule.

The NAT rule should look similar to the following.
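Conceptually, the NAT rule is a one-to-one mapping from the public (IP, port) pair to the internal pair. A minimal sketch of that translation (illustrative only, not how the gateway is implemented):

```python
# The single NAT rule from the table above, modelled as a lookup:
# inbound HTTPS on the public IP is translated to the appliance's
# internal IP on the same port.
NAT_RULES = {
    ("3.122.197.159", 443): ("192.168.1.201", 443),
}

def translate(public_ip: str, port: int):
    """Return (internal_ip, internal_port) for an inbound packet,
    or None if no NAT rule matches."""
    return NAT_RULES.get((public_ip, port))

print(translate("3.122.197.159", 443))  # ('192.168.1.201', 443)
```

Any port other than 443 on the public IP has no mapping, which is why only the HTTPS service is reachable from the Internet.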

OpenVPN Configuration

We need to configure OpenVPN before it will accept SSL VPN connections. Ensure you’ve gone through the initial configuration detailed in this document:

https://openvpn.net/vpn-server-resources/deploying-the-access-server-appliance-on-vmware-esxi/

  • Connect to the OpenVPN appliance VM using a web browser. The URL for my appliance is https://192.168.1.201:943
  • Log in using openvpn and the password you set earlier.

  • Click on the Admin button

Configure Network Settings

  • Click on Network Settings and enter the public IP that was issued by VMware Cloud on AWS earlier.
  • Also, only enable the TCP daemon.

  • Leave everything else on default settings.
  • Press Save Settings at the bottom.
  • Press the Update Running Server button.

Configure Routing

  • Click on VPN Settings and enter the subnet that vCenter runs on under the Routing section. I use the Infrastructure Subnet, 10.71.0.0/16.

  • Leave all other settings at their defaults; however, this depends on what you configured when you deployed the OpenVPN appliance initially. My settings are below:

  • Press Save Settings at the bottom.
  • Press the Update Running Server button.

Configure Users and Users’ access to networks

  • Click on User Permissions and add a new user
  • Click on the More Settings pencil icon, configure a password, and add the subnets that you want this user to be able to access. I am using 192.168.1.0/24 (the OpenVPN-network subnet) and 10.71.0.0/16 (the Infrastructure Subnet for vCenter and ESXi in the SDDC). This will allow clients connected through the SSL VPN to connect directly to vCenter.

If you don’t know the Infrastructure Subnet, you can obtain it by going to Network & Security > Overview.

  • Press Save Settings at the bottom.
  • Press the Update Running Server button.

Installing the OpenVPN SSL VPN client onto a client device

The desktop client is only required if you do not want to use the web browser to initiate the SSL VPN. Unfortunately, browser-based connections require signed certificates configured on OpenVPN. I don’t have any for this example, so we will use the desktop client to connect instead.

For this section I will use my laptop to connect to the VPN.

  • Open up an HTTPS browser session to the public IP address that was provisioned by VMware Cloud on AWS earlier. For me this is https://3.122.197.159.
  • Accept any certificate warnings to proceed. Of course, you can use real signed certificates with your OpenVPN configuration.
  • Enter the username and password of the user that was created earlier, and select the Connect button.

  • Click on the continue link to download the SSL VPN client

  • Once downloaded, launch the installation file.
  • Once complete, you can close the browser; it won’t connect automatically as we are not using signed certificates.

Connecting to the OpenVPN SSL VPN client from a client device

Now that the SSL VPN client is installed we can open an SSL VPN tunnel.

  • Launch the OpenVPN Connect client. I’m on OS X, so a Spotlight search for “OpenVPN Connect” will bring up the client.
  • Once launched, you can click on the small icon at the top of your screen.

  • Connect to the public IP relevant to your OpenVPN configuration.
  • Enter the credentials then click on Connect.
  • Accept all certificate prompts and the VPN should now be connected.

Connect to vCenter

Open up an HTTPS browser session and use the internal IP address of vCenter. You may need to add a hosts file entry mapping the public FQDN for vCenter to the internal IP instead. That’s it! You’re now accessing vCenter over an SSL VPN.
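If you need the hosts file entry, it is a single line mapping vCenter’s public FQDN to its internal IP. The FQDN below is a made-up placeholder; use the vCenter FQDN shown in your own SDDC’s settings:

```
# /etc/hosts on Linux/macOS, or C:\Windows\System32\drivers\etc\hosts on Windows
10.71.224.4   vcenter.sddc.example.com   # placeholder FQDN - substitute your SDDC's vCenter FQDN
```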

It’s also possible to use this method to connect to other network segments. Just follow the procedures above to add additional network segments and rules in the Compute Gateway and also add additional subnets to the Access Control section when adding/editing users to OpenVPN.

Call to Action

Learn more with these resources:

Coming to VMworld US? Get yourself a VVOL compliant all software storage array

This article details all of my and Atlantis’ activities at VMworld US. Read more for an introduction to what we will be doing and announcing, and a sneak peek at our upcoming technology roadmap that solves some of the major business issues concerning performance, capacity and availability today. It is indeed going to be a VMworld with ‘no limits’, and one of the great innovations that we will be announcing is Teleport. More on this later!

Teleport your files

No limits with Teleport

I’ll be in San Francisco from Saturday 23rd August until Thursday 28th August, where I’ll be representing the USX team and looking after the Hands on Labs, running live demos and having expert one on ones at the booth. Come and visit to learn more about USX and how I can help you get more performance and capacity out of your VMware and storage infrastructure. I’d love to hear from you.

Where can you find me?

Atlantis is a Gold sponsor this year with Hands on Labs, a booth and multiple speaking sessions. Read on to find out what we’ll be announcing and where you can find my colleagues and me.

Booth in the Exhibitor Hall

I’ll mostly be located at booth 1529. You can find me and my colleagues next to the main VMware booth; just head straight up past the HP, EMC, NetApp and Dell stands and come speak to me about how USX can help you claim more performance and capacity from these great enterprise storage arrays.

Speak to me about USX data services and I’ll show you some great live demos on how you can reclaim up to 5 times your storage capacity and gain 10 times more performance out of your VMware environment.

Here’s one showing USX as storage for vCloud Director in a Service Provider context and also for Horizon View.

https://www1.gotomeeting.com/register/626574592

If that’s not enough then come and speak to me about some of these great innovations:

  • If you’ve been waiting for a VVOL compliant all software vendor to try VVOLs with vSphere 6 beta then wait no more.
  • VMware VVOL Support – all of your storage past, present and future instantly become VVOL compliant with USX.
  • Teleport – the vMotion of the storage world, which gives you the ability to move VMs, VMDKs and files between multiple data centers and the cloud in seconds to improve agility (if you’re thinking it’s Storage vMotion, trust me, it is not).
  • And more….

Location in Solutions Exchange

Sessions

We have three breakout sessions this year, two of them with our customers UHL and Northrim Bank, where Dave Rose and Erick Stoeckle respectively will take you through how they use USX in production.

The other breakout session is focused on VVols, VASA, VSAN and USX Data Services and will be delivered by our CTO and Founder Chetan Venkatesh (@chetan_). If you have not had the pleasure to hear Chetan speak before, then please don’t miss this opportunity. The guy is insane and uses just one slide with one picture to explain everything to you. He is a great storyteller and you shouldn’t miss it – even if it’s just for the F bombs that he likes to drop.

Chetan will also do a repeat 20-minute condensed session in the Solutions Exchange for a brain dump of Atlantis USX Data Services. Don’t miss this! Chetan will take you through the great new technology in the Atlantis kitbag.

Session Title | Speaker(s) | When | Where
STP3212 – Unleashing the Awesomeness of the SDDC with Atlantis USX | Chetan Venkatesh – Founder and CTO, Atlantis Computing | Tuesday, Aug 26, 11:20 AM – 11:40 AM | Solutions Exchange Theater Booth 1901
INF2951-SPO – Unleashing SDDC Awesomeness with Atlantis USX: Building a Storage Infrastructure for Tier 1 VMs with vVOLS, VASA, VSAN and Atlantis USX Data Services | Chetan Venkatesh – Founder and CTO, Atlantis Computing | Wednesday, Aug 27, 12:30 PM – 1:30 PM | Somewhere in the Moscone (TBC)
EUC2654 – UK Hospital Switches From Citrix XenApp to VMware Horizon Saving £2.5 Million and Improving Patient Care | Dave Rose – Head of Design Authority, UHL; Seth Knox – VP Products, Atlantis Computing | Wednesday, Aug 27, 1:00 PM – 2:00 PM | Somewhere in the Moscone (TBC)
STO2767 – Northrim Bank and USX | Erick Stoeckle, Northrim Bank; Nishi Das – Director of Product Management, ILIO USX, Atlantis Computing Inc. | Thursday, Aug 28, 1:30 PM – 2:30 PM | Somewhere in the Moscone (TBC)

Hands on Labs

You can find the hands on labs in the Hands on Labs hall, and I’ll also be there to support you if you’re taking this lab. The Atlantis USX HOL is titled:

HOL-PRT-1465 – Build a Software-based Storage Infrastructure for Tier 1 VM Workloads with Atlantis USX Data Services.

This HOL consists of three modules, each of which can be taken separately or one after the other.

Modules 1 and 2 are read-and-click modules where you will follow the instructions in the lab guide and create the USX constructs using the Atlantis USX GUI.

Module 3, however, uses the Atlantis USX API browser to quickly perform the steps in Module 1 with some JSON code.

All three modules will take you approximately an hour and a half to complete.

I had an interesting time writing this lab which was a balancing exercise in working with the limited resources assigned to my Org VDC. Please provide feedback on this lab if you can, it’ll help with future versions of this HOL. Just tweet me at @hugophan. Thanks!

Note that performance will be an issue because we are using the VMworld Hands on Labs hosted on Project NEE/OneCloud. This is a vCloud Director cloud in which the ESXi servers that you will see in vCenter are actually all virtual machines. Any VMs that you run on these ESXi servers will themselves be what we call nested VMs. In some cases you could actually see two or more nested levels. How’s that for inception? Just be aware that the labs are for a GUI, concept and usability feel, not for performance.

If you want to see performance, come to our booth!

VMware Hands on Labs with 3 layers of nested VMs!

Hands on Labs modules

Module 01: Atlantis USX – Deploying together with VMware VSAN to deliver optimized local storage

Module Narrative

Using Atlantis USX, IT organizations can pool VSANs with existing shared storage, while optimizing it with Atlantis USX In-Memory storage technology to boost performance, reduce storage capacity and provide storage services such as high availability, fast cloning and unified management across all datacenter storage hardware. The student will be taken through how to build a Hybrid virtual volume that optimizes VMware VSAN, allowing it to deliver high-performing virtual workloads from local storage.

  • Build a USX Capacity Pool using the underlying VMware VSAN datastore
  • Build a USX Performance Pool from local server RAM
  • Build a Hybrid USX virtual volume suitable for running SQL Server
  • Present the Atlantis USX virtual volume to ESX over NFS

Module Objectives
Development Notes

A customer has built a resilient datastore from local storage using VSAN. This is then pooled by Atlantis USX to provide the deduplication and I/O optimization that server workloads require. A joint whitepaper of this solution has already been written here: http://blog.atlantiscomputing.com/2014/02/atlantis-ilio-usx-and-vmware-vsan-join-forces-on-software-defined-storage/
Estimated module duration: 45 minutes

 

Module 02: Atlantis USX – Build In-Memory Storage

Module Narrative

With Atlantis USX In-Memory storage optimization, processing computationally intensive analytics becomes easier and more cost-effective, allowing more data to be processed per node and reducing the time to complete these I/O-intensive jobs. Workloads may include Hadoop, Splunk and MongoDB. During this lab the student will be taken through how to build an Atlantis USX virtual volume using local server memory.

  • Build a USX Performance Pool aggregating server RAM from a number of ESX hosts.
    • Log into the web-based management interface and connect it to the vCenter hosting the ESX infrastructure
    • Export the memory from the three ESX hosts onto the network using Atlantis aggregation technology.
    • Combine the discrete RAM resource into a protected performance pool with the Pool creation wizard.
  • Build an In-Memory virtual volume suitable for running a big data application
    • Run through the Create Virtual Volume wizard selecting In-Memory and deploying the In-Memory Virtual Volume
  • Present the Virtual Volume (datastore) to ESX over NFS.
    • Add the newly created datastore into ESX.

Module Objectives
Development Notes

The use case for this lab is increasing application performance by taking advantage of the storage optimization features in Atlantis USX. Estimated module duration: 30 minutes

 

Module 03: Atlantis USX – Using the RESTful API to drive automation and orchestration to scale a software-based storage infrastructure

Module Narrative

Atlantis USX has a powerful set of RESTful APIs. This module will give you insight into those APIs by using them to build out a Virtual Volume. In this module you will:

  • Connect to the USX API browser and review the available APIs
  • Create a Capacity and Memory Pool with the API
  • Create a Virtual Volume with the API

Module Objectives
Development Notes

The intent of this lab is to provide an example of how to use the Atlantis USX RESTful API to deploy USX at scale. Estimated module duration: 15 minutes
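To give a feel for what Module 3’s API-driven approach looks like, here is a rough Python sketch. The endpoint path and payload field names below are illustrative assumptions, not the documented Atlantis USX API; the lab guide contains the real calls:

```python
import json
import urllib.request

# Illustrative payload only -- the field names are assumptions for this
# sketch, not the documented Atlantis USX schema.
def build_volume_request(name: str, capacity_pool: str, memory_pool: str) -> dict:
    return {
        "volumename": name,
        "capacitypool": capacity_pool,
        "performancepool": memory_pool,
        "exporttype": "NFS",
    }

def create_volume_request(api_base: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) a JSON POST to a hypothetical endpoint."""
    return urllib.request.Request(
        url=f"{api_base}/usxmanager/volumes",  # hypothetical path
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = create_volume_request(
    "https://usx.example.local:8443",  # hypothetical USX manager address
    build_volume_request("hol-vol01", "cappool01", "mempool01"),
)
print(req.get_method(), req.full_url)
```

The point of the module is exactly this pattern: the same constructs you click through in Modules 1 and 2 reduce to a handful of JSON payloads POSTed to the API, which is what makes deployment scriptable at scale.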

Giveaways

Oculus giveaway! See the reality in software defined storage

That’s right! I’ll be giving some of these away at the booth, make sure you stop by to see the new reality in software defined storage!

You can also pick up some of the usual freebies like T-shirts, pens, notepads etc.

There are also Google Glasses, Chromecasts, quad copters and others. We’re also working on something special. Watch this space.

Live Demos at the Booth

Come and speak to me and my colleagues to learn how USX works. We will be running live demos of the following subjects:

  • USX – Storage Consolidation for any workload on any storage.
  • USX – Database Performance Acceleration.
    • Run Tier-1 workloads on commodity hardware.
    • Run Tier-1 high performance workloads on non all-flash or hybrid storage arrays.
  • USX – All Flash Hyper-Converged with SuperMicro/IBM/SANdisk.
  • USX – Teleport (think vMotion for VMs, VMDKs and files over long distances and high latency links). Come talk to me for a live demo.

Beam me up Scotty!

  • USX Tech Preview – Cloud Gateway – using USX data services with AWS S3 as primary storage.
  • USX – VDI on USX on VSAN.
  • VDI – NVIDIA 3D Graphics.

Atlantis Events

SF Giants Game

SF Giants Game, Mon, Aug 25th at 19:00. Please contact your Atlantis Representative or ping me a note if you haven’t received an invite.

USX Partner Training & Breakfast, Wed, Aug 27th at 08:00. Please contact your Atlantis Representative or ping me a note if you’re an Atlantis Partner but have not received an invite.

Let’s meet up!

If you’re at VMworld or in the SF Bay area then let’s meet up and expand our networks.

Event Date | Hours | Event Name | Where | Register
Sat, Aug 23rd | 19:00 – 22:00 | VMworld Community Kickoff | Johnny Foley’s, 243 O’Farrell Street | http://twtup.com/6878fiv3e9fjrqz
Sun, Aug 24th | 13:00 – 16:00 | #Opening Acts | City View at Metreon | https://openingacts2014.eventbrite.com/?ref=wplist
Sun, Aug 24th | 15:00 – 17:00 | #v0dgeball Charity Tournament | SOMA Rec Center – Corner of Folsom and 6th Streets | http://tweetvite.com/event/v0dgeball2
Sun, Aug 24th | 16:00 – 19:00 | VMworld Welcome Reception | Solutions Exchange, Moscone Center | n/a
Sun, Aug 24th | 20:00 – 23:00 | #VMunderground | City View at Metreon | https://vmunderground.eventbrite.com/?ref=wplist
Mon, Aug 25th | 19:00 – 23:00 | #vFlipCup VMworld Community TweetUp | Folsom Street Foundry | http://twtvite.com/vflipcup14
Tues, Aug 26th | 16:30 – 18:00 | Hall Crawl | Solutions Exchange, Moscone Center | n/a
Tues, Aug 26th | 19:00 – 22:00 | #VCDX, #vExpert Party | E&O Restaurant & Lounge, 314 Sutter St | Invite only
Tues, Aug 26th | 20:00 – 23:00 | #vBacon | Ferry Building, 1 Sausalito | http://tweetvite.com/event/vBacon2014
Wed, Aug 27th | 17:00 – 19:00 | VMware vCHS Tweetup | 111 Minna |
Wed, Aug 27th | 19:00 – 22:00 | VMworld Party | Moscone Center | n/a

Can’t meet up?

Follow me and my colleagues on Twitter for live updates during VMworld and send us messages and questions, we’d love to hear from you.

Hugo Phan @hugophan

Chetan Venkatesh @chetan_

Seth Knox @seth_knox

Mark Nijmeijer @MarkNijmeijerCA

Gregg Holzrichter @gholzrichter

Toby Colleridge @tobyjcol

Accelerating SDDC at Atlantis Computing

Sometimes an opportunity comes along that is just too damn exciting to pass.

This is a short post on my latest move to Atlantis Computing from Canopy Cloud. My new role is primarily with the USX team to help drive the adoption of USX into large Enterprises and Service Providers. USX is Atlantis Computing’s newest technology which does for server workloads what ILIO did for EUC. Quite simply my job is to make USX a success. I’ll be helping the virtualization community understand Atlantis Computing’s USX and ILIO technologies, working with customers and partners and also with our technology partners, such as VMware, NetApp, VCE, Fusion-IO and IBM. Even though USX is new, the technology is based on ILIO which has been shipping since 2009 and is powering the largest VDI deployments in the world.

Today was officially my first day and it was a pretty interesting one. It started with a customer meeting with a large bank in London and then BriForum, both in a listening capacity, but I couldn’t help myself and ended up talking about both ILIO and USX to some techies at the bank and then to some people who came to the stand at BriForum. There is definitely hot interest in using RAM to accelerate storage in both EUC and server workloads.

Atlantis Computing’s ILIO and USX technologies are truly software-defined and, in simple terms, enable the in-line optimisation of both IOPS and capacity BEFORE the IOPS and blocks hit the underlying storage. For example, the blue graph represents IOPS to the storage array for 200 VDI VMs without ILIO, and the red graph represents IOPS to the same storage array with ILIO: a saving of 80%.

ILIO

In addition, because storage is deduped in-line, there are also massive capacity savings on the underlying storage. Since data is deduped before being written to disk, there is no need for a post-process dedupe job on blocks already written, and hence no overhead on the storage processor or spindles.

In-line deduplication is not the only capability within the Atlantis Computing technology; some of the others are:

features

 

I won’t go into each one in this post; I’ll save that for another day. I’m very excited about my new role at a new company and hope to blog a lot more often as I learn more about Atlantis Computing and, of course, storage virtualization and optimisation in general.

If you want to read more, some of these resources help explain the tech. Oh, and we offer a completely free ILIO license for use in POC/lab environments, so be sure to check it out!