VMware NSX Lab Environment – Part 2: Prepare Hosts and Deploy NSX Controllers

Introduction

In Part 2 of this series I will cover preparing the ESXi hosts for NSX and deploying an NSX Controller cluster.  As mentioned in the first part of this series, “Part 1: Import and Configure NSX Manager”, the NSX Manager facilitates the deployment of the Controller cluster and ESXi host preparation (among other things), so needless to say, having it up and functioning is a prerequisite for this phase.

At the completion of this post, the NSX environment should be mostly configured and we will be able to start doing fun stuff like deploying logical switches, setting up distributed routing, and playing with distributed firewall rules.  I’m pretty excited. /geekout

As you may have gathered from their name, the NSX Controllers reside in the “Control plane”, while the NSX Manager resides in the “Management plane”.  Services such as logical switches and distributed logical routers/firewalls are implemented as hypervisor kernel modules and reside in the “Data plane”.  NSX Edge, a virtual appliance, also resides in the data plane.  The focus of this post will be the Control plane.

This diagram, courtesy of the VMware NSX 6 Design Guide, depicts the various “planes” in which NSX components reside.

Controller Deployment 2

Deploying the NSX Controllers

1. Now that NSX Manager is running and linked to your vCenter server, the next step in the process would be to enter your NSX licenses.  However, since this is a lab and I am running in 60-day evaluation mode, I have nothing to enter here.  If you do own the product, or are lucky enough to have a license key for perpetual lab purposes, you’d just go to “Administration > Licensing > Licenses” in the vSphere Web Client and enter the appropriate license keys.

Controller Deployment 1

2. After you’re licensed (or running in eval mode), it’s time to deploy the NSX Controllers.  Under “Networking and Security”, click “Installation”.

Controller Deployment 3

3. Click the green “+” sign underneath “NSX Controller nodes”…

Controller Deployment 4

…and you will be asked to enter vSphere cluster, storage, and networking info.

Controller Deployment 5

Click “Select” next to “IP Pool”.  If you’ve already created one you’d like to use, select it and then click “OK”.  Otherwise, click the green “+” and add a new IP Pool.  Once you’ve entered the details appropriate to your environment, click “OK” on the “Add IP Pool” window, select the radio button next to your new IP Pool, and then click “OK” again on the “Select IP Pool” window.

I’ve created a 10-address IP pool for the NSX Controllers to use.  Technically you’d only need as many IP addresses as you have NSX Controllers, but I get kind of OCD about that sort of thing, so I’ve allocated a block of 10.  IPs are one of the few things I have an abundance of in the lab environment.

Controller Deployment 6
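** If you’d rather script this part, NSX Manager also exposes IP pools through its REST API.  Below is a minimal Python sketch using the requests library.  The endpoint path and XML schema are as I recall them from the NSX 6.x API guide, and the hostname, credentials, and addresses are lab placeholders, so verify everything against the API guide for your version.

```python
# Minimal sketch: create an NSX IP pool via the NSX-v REST API.
# The endpoint and XML layout are recalled from the NSX 6.x API guide
# (treat them as assumptions); hostname/credentials/addresses are
# placeholders for my lab.
import requests

NSX_MGR = "https://nsxmanager.lab.local"
AUTH = ("admin", "VMware1!")

pool_xml = """<ipamAddressPool>
  <name>NSX-Controller-Pool</name>
  <prefixLength>24</prefixLength>
  <gateway>10.0.10.1</gateway>
  <dnsSuffix>lab.local</dnsSuffix>
  <ipRanges>
    <ipRangeDto>
      <startAddress>10.0.10.50</startAddress>
      <endAddress>10.0.10.59</endAddress>
    </ipRangeDto>
  </ipRanges>
</ipamAddressPool>"""

resp = requests.post(
    f"{NSX_MGR}/api/2.0/services/ipam/pools/scope/globalroot-0",
    auth=AUTH,
    headers={"Content-Type": "application/xml"},
    data=pool_xml,
    verify=False,  # lab only; NSX Manager ships a self-signed certificate
)
resp.raise_for_status()
print("New pool ID:", resp.text)
```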

4. Click “OK” on the “Add Controller” window.

** It’s worth noting that the NSX Controller password requirements are fairly strict.  The “lab password” I’ve been re-using throughout the environment did not meet the length requirement, and it complained…as you can see here.  I have a feeling this is one of those passwords you don’t want to lose, so be sure to keep it somewhere safe.

Controller Deployment 7

Assuming everything is good with your configuration, NSX Manager should begin deploying the Controllers to your vSphere environment using the credentials supplied when you linked it to vCenter (as you can see in the “Initiated by” column under “Recent Tasks”).  It’s also worth noting that the NSX Controllers get a unique identification string appended to their names; avoid the urge to modify this, as it’s by design.  (And yes, I did just throw a screenshot of the “fat”/C# client in here…I do catch myself flipping back to it from time to time.  It’s a habit I’m working on breaking 😛 )

Controller Deployment 8

5. Once the Controller deployment has completed, it should show a “Normal” status in the vSphere Web Client window.  If that’s the case, it’s fairly safe to say subsequent Controller deployments will be successful, so you can now repeat this process two more times to meet the “3 Controller node minimum” recommendation.  NSX Controllers should be deployed in odd numbers so that a “majority vote” can occur when electing a Master controller.

**You will not be prompted to enter a Controller password again on subsequent Controller deployments; the password is shared between all NSX Controllers in the Controller cluster.

Controller Deployment 9
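** If you’d rather script the two additional deployments instead of clicking through the wizard again, the REST API can handle it.  Another hedged Python sketch: the /api/2.0/vdn/controller endpoint and controllerSpec schema are from my recollection of the NSX 6.x API guide, and every ID below (IP pool, cluster, datastore, port group) is a placeholder you would look up in your own environment.  Per the note above, the password element may only apply to the first Controller you deploy.

```python
# Minimal sketch: deploy an additional NSX Controller via the NSX-v
# REST API.  Endpoint and schema recalled from the NSX 6.x API guide
# (treat as assumptions); all object IDs below are lab placeholders.
import requests

NSX_MGR = "https://nsxmanager.lab.local"
AUTH = ("admin", "VMware1!")

controller_xml = """<controllerSpec>
  <name>nsx-controller-2</name>
  <ipPoolId>ipaddresspool-1</ipPoolId>
  <resourcePoolId>domain-c7</resourcePoolId>
  <datastoreId>datastore-11</datastoreId>
  <networkId>dvportgroup-20</networkId>
  <password>SuperSecretCtrlPassw0rd!</password>
</controllerSpec>"""

resp = requests.post(
    f"{NSX_MGR}/api/2.0/vdn/controller",
    auth=AUTH,
    headers={"Content-Type": "application/xml"},
    data=controller_xml,
    verify=False,  # lab only
)
resp.raise_for_status()
print("Deployment job:", resp.text)  # NSX returns a job ID you can poll
```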

The Controllers are clustered?  Yes, they are.

Without going into too much detail (the NSX Design Guide does a great job explaining it), the “responsibilities” of an NSX Controller are distributed among all members of the Controller cluster.  The Master Controller is responsible for determining when a Controller node has failed and where the “slices” of a particular role it held should be transitioned.

This image, courtesy of the VMware NSX Design Guide, shows the number of nodes that can fail for a particular NSX Controller cluster count.

NSX Controller cluster failure tolerance by node count (source: VMware NSX 6 Design Guide)
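The math behind that table is simple majority arithmetic, which this little snippet reproduces:

```python
# Quorum math behind the failure-tolerance table: a cluster of n nodes
# needs a strict majority (n // 2 + 1) to elect a Master, so it can
# afford to lose whatever is left over.
for n in (1, 2, 3, 4, 5):
    majority = n // 2 + 1
    tolerated = n - majority
    print(f"{n} controller(s): majority={majority}, tolerates {tolerated} failure(s)")
```

Notice that a 4-node cluster tolerates no more failures than a 3-node cluster, which is exactly why odd numbers are recommended.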

6. In this step, we will create an “anti-affinity” rule for the NSX Controllers to ensure no two Controllers ever reside on the same host.  This is an important step in mitigating impact to the NSX environment if an ESXi host fails.  For a lab environment it’s probably not a big deal, but I felt it was important to show, as I frequently see vSphere environments with no DRS rules set up when they should probably be used for resiliency of redundant guest VMs.  As others have commented, I’m kind of surprised that the creation of the anti-affinity rule isn’t done by NSX Manager automatically when deploying two or more Controllers.  I believe other components, such as NSX Edges, do have a rule created by default…perhaps I’m mistaken and will find out shortly.

Anyhow…

In the vSphere Web Client, navigate to the “Hosts and Clusters” view, select the applicable vSphere cluster, then click the “Manage” tab.  Select “VM/Host Rules” and then click “Add”.

Controller Deployment 10

7. Give your DRS rule a name; I usually like to be descriptive about the nature of it (e.g. include “Anti-Affinity” in the name).  Click the “Add” button, select your NSX Controllers, and click “OK”.  Click “OK” once more on the “Add VM/Host Rule” window.

DRS Rules 2
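** For the automation-inclined, the same rule can be created with pyVmomi.  This is a minimal sketch; the vCenter hostname, credentials, cluster name, and Controller VM names are placeholders for my lab (remember that NSX appends that unique identification string to the Controller VM names), so substitute your own.

```python
# Minimal pyVmomi sketch: create a "Separate Virtual Machines" DRS rule
# for the NSX Controllers.  All names and credentials are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

cluster = find_by_name(vim.ClusterComputeResource, "Lab-Cluster")
controllers = [find_by_name(vim.VirtualMachine, n) for n in
               ("NSX_Controller_1", "NSX_Controller_2", "NSX_Controller_3")]

# Anti-affinity rule: never place two Controllers on the same host
rule = vim.cluster.AntiAffinityRuleSpec(
    name="NSX-Controller-Anti-Affinity", enabled=True, vm=controllers)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
Disconnect(si)
```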

8. Now we will prepare the ESXi hosts for NSX.  Click on the “Host Preparation” tab and then select “Install” next to the appropriate cluster(s).  When prompted, click “Yes” to continue with the install.

Controller Deployment 12

If the host preparation was successful, you should see a green checkmark underneath the “Installation Status” and “Firewall” columns.

Controller Deployment 13
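** You can also pull the preparation status over the REST API.  Fair warning: I’m recalling the nwfabric status endpoint from the NSX 6.x API guide, so treat the path and query parameter as assumptions and verify them against your version; “domain-c7” is a placeholder cluster ID.

```python
# Sketch: query host preparation status for a cluster.  The endpoint
# path and "resource" parameter are recalled from the NSX 6.x API
# guide, so treat them as assumptions; domain-c7 is a placeholder.
import requests

resp = requests.get(
    "https://nsxmanager.lab.local/api/2.0/nwfabric/status",
    params={"resource": "domain-c7"},
    auth=("admin", "VMware1!"),
    verify=False,  # lab only
)
resp.raise_for_status()
print(resp.text)  # XML describing each fabric feature and its status
```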

9. The next step is to configure VXLAN on our NSX-enabled cluster.  On the “Host Preparation” tab, under the “VXLAN” column, select “Configure”.  You will need to select a Distributed Virtual Switch for VXLAN traffic (I’ve created one dedicated to that purpose with a single vNIC uplink…hey, it’s a lab), enter the appropriate VLAN, and set your MTU size.  It’s not recommended to go below 1600 due to the ~50 byte VXLAN header addition, so make sure your underlying physical (or in this case, virtual) network is configured for jumbo frames.
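That ~50 byte figure isn’t arbitrary; it’s the sum of the outer headers VXLAN wraps around each frame:

```python
# Where the ~50 byte VXLAN overhead comes from, per encapsulated frame:
outer_ethernet = 14  # outer Ethernet header (add 4 more if the outer frame is 802.1Q tagged)
outer_ipv4 = 20      # outer IPv4 header
outer_udp = 8        # outer UDP header
vxlan = 8            # VXLAN header carrying the 24-bit VNI
overhead = outer_ethernet + outer_ipv4 + outer_udp + vxlan
print(overhead)         # 50
print(1500 + overhead)  # 1550, the bare minimum to carry a 1500 byte inner frame
```

The recommended 1600 simply leaves some headroom on top of that 1550 minimum.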

I’m going to create an IP pool dedicated to VTEPs, so I’ve selected “New IP Pool…” from the drop-down box.

Controller Deployment 14

Enter the appropriate IP Pool information here, then click “OK”.  Like the IP pool I created for the NSX Controllers, this one has 10 IP addresses in it.  You will need a pool large enough to provide an IP address for each VTEP (VXLAN Tunnel End Point) interface on each host in that IP space.

Controller Deployment 15

Click “OK” in the “Configure VXLAN networking” window.

At this point, we should have green checkmarks across the board on our “Host Preparation” tab.

Controller Deployment 16

** There are considerable design decisions to be made when choosing your VMKNic teaming policy; the layout of your physical networking and Distributed Virtual Switch uplink configuration could dictate which options are viable.  The VMware NSX Design Guide goes over this in great detail (beginning around page 73) and is worth the read.

This table, courtesy of the VMware NSX Design Guide, shows the teaming and failover modes available based on the uplink type.

Controller Deployment 17

10. The next step is to create a Segment ID pool (a Segment ID is, I believe, also sometimes called a VXLAN Network Identifier, or VNI).  I like to think of Segment IDs as special VLANs inside of the NSX environment; they are used to differentiate the various logical network segments, just like VLANs on a physical switch logically segment the traffic.  NSX lets you specify a range between 5,000 and 16,777,215 (the maximum value of a 24-bit VNI), so roughly 16 million possibilities.  The range you specify in your Segment ID pool will dictate the maximum number of logical switches available to your NSX environment.

Under the “Installation” section, click the “Logical Network Preparation” tab and select the “Segment ID” section.  Click “Edit” next to “Segment IDs & Multicast Address allocation…”

Controller Deployment 18

11. Specify your Segment ID pool range.  I chose 5000-5999, which gives me 1,000 possible network segments…far more than I’d ever need in a lab, but hey, why not?

Controller Deployment 19
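** The same range can be defined through the REST API.  As with the earlier sketches, the endpoint and schema are as I recall them from the NSX 6.x API guide, so verify before relying on this.

```python
# Sketch: define the segment ID (VNI) range via the NSX-v REST API.
# Endpoint and XML recalled from the NSX 6.x API guide; treat as
# assumptions and confirm for your version.
import requests

segment_xml = """<segmentRange>
  <name>Lab-Segment-Pool</name>
  <begin>5000</begin>
  <end>5999</end>
</segmentRange>"""

resp = requests.post(
    "https://nsxmanager.lab.local/api/2.0/vdn/config/segments",
    auth=("admin", "VMware1!"),
    headers={"Content-Type": "application/xml"},
    data=segment_xml,
    verify=False,  # lab only
)
resp.raise_for_status()
```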

** I’ve not checked the “Enable multicast addressing” option and am relying on Unicast mode for my BUM traffic (Broadcast, Unknown unicast, and Multicast).  Not to sound like a broken record, but there are considerable design decisions involved in choosing between Multicast, Unicast, and Hybrid modes.  Page 25 of the VMware NSX Design Guide goes into detail about the pros and cons of each, when to use or not use them, etc.  This is not something I ever had to give much thought to in my “server centric” world prior to starting down the NSX wormhole, and I found it one of the harder concepts to grasp and remember when studying for the VCP-NV exam.  This is an area I am still shoring up because it’s directly related to the way NSX propagates information throughout the environment, so it’s obviously a critical piece.

12. The final piece is to configure our Transport Zone.  What is a Transport Zone, you ask?  Well, per VMware, it quite literally “defines a collection of ESXi hosts that can communicate with each other across a physical network infrastructure.”  In other words, it determines which cluster(s) participate in the NSX environment.

Click on the “Transport Zones” section, then the green “+” sign.

Controller Deployment 20

Give the Transport Zone a name, select the appropriate replication mode and the cluster(s) you wish to include in the Transport Zone, then click “OK”.

Controller Deployment 21
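** For completeness, here is a sketch of creating the Transport Zone via the REST API.  The doubly nested <cluster> element looks odd but matches my recollection of the NSX 6.x schema; “domain-c7” is a placeholder cluster ID, and UNICAST_MODE corresponds to the Unicast replication mode discussed earlier.  Verify against the API guide for your version.

```python
# Sketch: create a transport zone (vdnScope) via the NSX-v REST API.
# Endpoint, schema, and mode names recalled from the NSX 6.x API guide;
# treat them as assumptions.  domain-c7 is a placeholder cluster ID.
import requests

tz_xml = """<vdnScope>
  <name>Lab-Transport-Zone</name>
  <clusters>
    <cluster>
      <cluster>
        <objectId>domain-c7</objectId>
      </cluster>
    </cluster>
  </clusters>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</vdnScope>"""

resp = requests.post(
    "https://nsxmanager.lab.local/api/2.0/vdn/scopes",
    auth=("admin", "VMware1!"),
    headers={"Content-Type": "application/xml"},
    data=tz_xml,
    verify=False,  # lab only
)
resp.raise_for_status()
print("Transport zone ID:", resp.text)
```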

13. At this point, all the NSX Controllers and supporting configuration should be in place.  Review each tab under “Networking and Security > Installation > Logical Network Preparation” to ensure everything looks correct.

Controller Deployment 22

Controller Deployment 23

Controller Deployment 24

If so, we’re ready to do all the fun stuff you really wanted to deploy NSX for.  The next post in this series will cover the configuration of logical switching, distributed routing, and Edge services.  A couple of the big things I’m looking to demonstrate are isolation of mock customers in a multi-tenant environment and securing a VDI deployment with NSX.

This being my first go-round installing NSX, if you see any inaccuracies or a better way of doing things, please let me know.  And as always, thanks for reading!

 
