
Friday, 1 October 2021

Step-by-Step-NSX-T 3.1 design and Install-P2

In my previous post, Home Lab Step-by-Step-NSX-T 3.1 design and Install-P1, we completed the installation of the NSX-T managers and added a compute manager. In this post we will start discussing the different deployment options and how to ascertain which deployment model is best suited for our customers.

But before jumping to deployment options, I would like to talk about architecting/designing a solution. This is for the folks who are new to the solution architecture field. To summarize, a solution is built up of five components: requirements, risks, constraints, assumptions and, based on these, design decisions.

Just as every business starts with an idea, every solution starts from a requirement. Requirements are further divided into two categories: business and technical. A good architect is able to differentiate between these two and keeps them in mind with every decision throughout the solution.

What could a business requirement be? Well, "Company A wants to reduce its IT expenses" is a business requirement, whereas "Company A's CTO wants the solution to be highly available" is a good example of a technical requirement.

Requirements are the main driving force of the design, but constraints are the variables of the design that cannot be changed. "Company A wants to utilize the hardware it bought for another project" is a constraint.

Now, while designing a solution, you will find certain areas where you cannot apply the best practice because a constraint or a specific requirement stands in the way, which puts the design at risk; that should be well documented. For example, Company A bought only two servers for its management cluster, which does not allow you to run vCenter in high-availability cluster mode. This poses a risk of "vCenter unavailability in case of host failure", and for each risk you need to plan its mitigation. In our example you cannot run vCenter in VCHA and you cannot completely mitigate this risk, but you can provide high availability using vSphere HA.

Finally, assumptions: as a design is a canvas showing only the blueprint of the actual project, you will make certain assumptions, such as "an NTP server will be available for time sync".

Now, when you design a solution, you make design decisions. VMware did a very good job with the VMware Validated Designs, which I highlighted in my previous post; they contain all the general design decisions you need to make while you work on the solution. But be aware that they are not the final list: each design will have certain decisions that are not covered in VVD, and as an architect you need to figure out what those decisions are.

Each decision should be backed by a requirement, a constraint, an assumption or a risk-mitigation action. This was a brief introduction to solution design, and by following this practice day and night you can prepare for the VCDX certification. Now we will move forward with the NSX-T 3.1 design options.

It is a very good practice to check the release notes of the product you are going to design a solution with.

The most common scenario you will get is a single site, but don't think it's going to be an easy ride; that is just one dimension. Within a single site you can have multiple vCenters, multiple clusters in each vCenter, consolidated clusters for management and regular workloads, or perhaps segregation based on environment (test, pre-prod and prod). Depending on the customer's sector, such as defence, BFSI, or oil and gas, you might also have restrictions between environments within the site. It all comes down to the same thing: there are endless possibilities, and as an architect you need to design based on all the factors you have in your environment.

In this post I am going to cover a single-site, one-vCenter, consolidated/collapsed-cluster design; I will try to cover different scenarios as this series progresses. In this design we are using four physical uplinks.

Two uplinks are used with a VDS that hosts our management, vMotion and vSAN traffic.

The remaining two uplinks are used with the VDS for NSX-T.

What are the logical components in an NSX-T design?

  • vCenter Server
  • NSX Manager cluster
  • Edge VMs
  • Workload VMs
  • ESXi Hosts

We have already deployed our NSX-T manager, added a compute manager, and we know how to add additional NSX-T managers to complete the controller cluster. It's time to configure our NSX-T SDDC for overlay networking.

Here is the list of prerequisites we need to complete before we start the configuration.

  • IP pool for host TEP addresses
  • IP pool for edge TEP addresses
  • Transport zones (overlay, host VLAN and edge VLAN)
  • Name of the distributed switch we will use with NSX
  • Name of the uplink profile
  • Name of the host transport node profile

Let's start the configuration of the NSX-T consolidated-design datacenter.

Log in to vCenter and the NSX-T manager. Just be aware that I have removed the additional NSX-T managers I deployed in my previous post to save physical resources; in a production environment, however, never use a single NSX-T manager.

Once you log in, you will be presented with the home screens of NSX and vCenter.



The first step is to configure the IP pools for the host TEPs and edge TEPs. Navigate to the NSX Manager UI and select the Networking tab; in the left-hand menu you will find IP Address Pools. Click on it and select Add IP Address Pool.

Fill in the name and description fields, then select Set under Subnets.

You will be presented with the Set Subnet wizard; add the subnet details there.


Populate it with the information for the TEP subnet we finalized in the post Home Lab Part 3 Networking.
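If you prefer to script this step instead of clicking through the UI, the same pool can be created through the NSX-T Policy API. Below is a minimal sketch using Python's requests library; the manager address, credentials, pool name and subnet values are placeholders from my lab, so substitute your own.

```python
# Minimal sketch: create a host TEP IP pool via the NSX-T Policy API.
# Manager address, credentials and addresses are lab placeholders (assumptions).
import requests
import urllib3

urllib3.disable_warnings()  # lab only: we skip TLS verification below

NSX = "https://nsxmanager.lab.local"
AUTH = ("admin", "VMware1!VMware1!")  # replace with your credentials

# Create (or update) the pool object itself.
pool = {
    "display_name": "host-tep-pool",
    "description": "TEP addresses for ESXi transport nodes",
}
r = requests.put(f"{NSX}/policy/api/v1/infra/ip-pools/host-tep-pool",
                 json=pool, auth=AUTH, verify=False)
r.raise_for_status()

# Attach a static subnet with an allocation range and gateway.
subnet = {
    "resource_type": "IpAddressPoolStaticSubnet",
    "display_name": "host-tep-subnet",
    "cidr": "172.16.10.0/24",
    "allocation_ranges": [{"start": "172.16.10.11", "end": "172.16.10.50"}],
    "gateway_ip": "172.16.10.1",
}
r = requests.put(f"{NSX}/policy/api/v1/infra/ip-pools/host-tep-pool/"
                 "ip-subnets/host-tep-subnet",
                 json=subnet, auth=AUTH, verify=False)
r.raise_for_status()
```

Running the same script with a different pool name and CIDR gives you the edge TEP pool as well.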


Follow the same steps to create an IP pool for the edge TEPs. Once the TEP pools are ready, we will move on to creating transport zones. Remember, transport zones define the boundary of the fabric, hence we need to define them correctly. We will create one VLAN transport zone for hosts and one for edge nodes, and both hosts and edges will be part of the same overlay transport zone.

To accomplish that, we will navigate to System >> Fabric >> Transport Zones.

You will see that two default TZs are present, one VLAN and one overlay; however, we will create our own TZs. Click Add Zone.

Name the transport zone and select the traffic type.

In the same way, create two more transport zones, one for hosts and one for edges, but with VLAN as the traffic type. Your configuration should look like this.
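The same three transport zones can also be created through the NSX-T manager API. Here is a minimal sketch under the same lab assumptions as before (the manager address, credentials and zone names are mine, not mandated by NSX):

```python
# Minimal sketch: create the overlay and VLAN transport zones via the
# NSX-T manager API. Names are lab placeholders (assumptions).
import requests
import urllib3

urllib3.disable_warnings()  # lab only

NSX = "https://nsxmanager.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

for name, ttype in [("tz-overlay", "OVERLAY"),
                    ("tz-vlan-host", "VLAN"),
                    ("tz-vlan-edge", "VLAN")]:
    r = requests.post(f"{NSX}/api/v1/transport-zones",
                      json={"display_name": name, "transport_type": ttype},
                      auth=AUTH, verify=False)
    r.raise_for_status()
    print(name, "->", r.json()["id"])  # note the IDs for later steps
```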
Now it's time to prepare a VDS to use with NSX-T.

Navigate to vCenter >> the Networking tab >> right-click on the datacenter and select New Distributed Switch.

Now follow the VDS wizard and give the switch a name.

I am selecting version 7.0.0, as my hosts are not updated with the latest ESXi image; you need to select the version that is compatible with your hosts' ESXi version.
Select the number of uplinks for this VDS. I unchecked the default port group box, as I do not wish to create a port group.


Finish the wizard to create the VDS. Once the VDS is ready, we need to change the default MTU to 9000, because we are going to use this VDS with NSX-T and the Geneve overlay needs an MTU of at least 1600 bytes.

Right-click on the newly created VDS, select Settings and choose Edit Settings.
Now change the default MTU value of 1500 to 9000.
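If you would rather script this vCenter step, a short pyVmomi sketch can set the MTU. The vCenter address, credentials and the switch name nsx-vds are assumptions from my lab:

```python
# Minimal sketch: set the VDS MTU to 9000 with pyVmomi.
# vCenter address, credentials and switch name are lab placeholders (assumptions).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local", pwd="VMware1!",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()

# Find the distributed switch by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "nsx-vds")

# Reconfigure the switch with jumbo frames for the Geneve overlay.
spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion  # required for reconfigure
spec.maxMtu = 9000
task = dvs.ReconfigureDvs_Task(spec)
# wait on the task if you need confirmation, then disconnect
Disconnect(si)
```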

Now we will add our ESXi hosts to the distributed switch. For that, right-click the new VDS again and choose Add and Manage Hosts.

You will be presented with the wizard; choose Add Hosts.
Click on New Hosts and select the hosts from the list.

Assign the uplinks; here you will choose unused adaptors of your ESXi hosts. Pick an unused adaptor and click Assign.

If you have multiple hosts with the same unused physical adaptor, tick the "Apply this uplink assignment to the rest of the hosts" checkbox.

Follow the same steps for the second uplink.
We are already using a separate VDS for the VMkernel ports, so we are not migrating any of those; click Next without any changes.

We are not migrating any VM networking either.
Just finish the wizard and the hosts will be added to the VDS; the short check below confirms the membership.
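As a quick sanity check, this short continuation of the earlier pyVmomi sketch lists the hosts attached to the switch (it reuses the si connection and the dvs lookup from the MTU snippet):

```python
# Minimal sketch: verify which hosts are attached to the new VDS.
# Assumes 'dvs' from the previous pyVmomi snippet is still in scope.
for member in dvs.config.host:          # one entry per attached host
    host = member.config.host           # the HostSystem reference
    print(host.name, "is attached to", dvs.name)
```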
Now it's time to create an uplink profile for the ESXi hosts. Navigate to the NSX Manager UI, select the System tab >> Fabric >> Profiles and click Add Profile.

Provide a name; please be aware that NSX-T is case sensitive, which is why I prefer using only lower case. Make no changes to LAGs and scroll down to Teamings.

Change the teaming policy from Failover Order to Load Balance Source.

Assign both uplink-1 and uplink-2 as active uplinks and set the VLAN ID. Unlike NSX-T 2.5.1, we do not assign the MTU here, as the MTU is set on the VDS.
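The equivalent API call goes to the host-switch-profiles endpoint. In this sketch the profile name, uplink names and transport VLAN are my lab choices (assumptions), and LOADBALANCE_SRCID is the API value behind the Load Balance Source option in the UI:

```python
# Minimal sketch: create the host uplink profile via the NSX-T manager API.
# Profile name, uplink names and transport VLAN are lab placeholders (assumptions).
import requests
import urllib3

urllib3.disable_warnings()  # lab only

NSX = "https://nsxmanager.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "host-uplink-profile",
    "transport_vlan": 10,               # TEP VLAN ID
    "teaming": {
        "policy": "LOADBALANCE_SRCID",  # "Load Balance Source" in the UI
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
}
r = requests.post(f"{NSX}/api/v1/host-switch-profiles",
                  json=profile, auth=AUTH, verify=False)
r.raise_for_status()
```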
We will create the uplink profile for the edge nodes once we have deployed them, which will be in a future post. So now we will move on to creating a transport node profile (remember, hosts are called transport nodes in NSX-T).

Navigate to System >> Fabric >> Profiles >> Transport Node Profiles. Click Add Profile.

Provide a name, select VDS, mode Standard, the vCenter name, the DSwitch and the transport zones, then scroll down.
Select the uplink profile we created; for the TEPs choose Use IP Pool and select the IP pool we created for the hosts, then finish the uplink assignment for NSX-T.
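For reference, the same profile can be pushed through the transport-node-profiles endpoint. The payload below is a sketch only: every ID (the VDS UUID from vCenter, the transport zone, uplink profile and IP pool IDs) must be looked up first, and the exact field shape should be checked against the NSX-T 3.1 API guide for your build:

```python
# Minimal sketch: create a VDS-backed transport node profile via the manager API.
# All IDs and names are placeholders (assumptions) to be looked up beforehand.
import requests
import urllib3

urllib3.disable_warnings()  # lab only

NSX = "https://nsxmanager.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

tnp = {
    "display_name": "host-tn-profile",
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [{
            "host_switch_id": "<vds-uuid-from-vcenter>",
            "host_switch_type": "VDS",
            "host_switch_mode": "STANDARD",
            "host_switch_profile_ids": [{
                "key": "UplinkHostSwitchProfile",
                "value": "<uplink-profile-id>",
            }],
            "ip_assignment_spec": {
                "resource_type": "StaticIpPoolSpec",
                "ip_pool_id": "<host-tep-pool-id>",
            },
            "transport_zone_endpoints": [
                {"transport_zone_id": "<tz-overlay-id>"},
                {"transport_zone_id": "<tz-vlan-host-id>"},
            ],
            # Map NSX uplink names onto the VDS uplinks.
            "uplinks": [
                {"uplink_name": "uplink-1", "vds_uplink_name": "Uplink 1"},
                {"uplink_name": "uplink-2", "vds_uplink_name": "Uplink 2"},
            ],
        }],
    },
}
r = requests.post(f"{NSX}/api/v1/transport-node-profiles",
                  json=tnp, auth=AUTH, verify=False)
r.raise_for_status()
```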

Now it's time to apply this transport node profile to our ESXi hosts.

Navigate to System >> Fabric >> Nodes, select the vCenter under Managed By, and tick the checkbox next to lab-cl01; that will enable the Configure NSX button.

Now hit the Configure NSX button, select the transport node profile we created and hit Apply.
This will initiate the NSX configuration on the ESXi hosts.

Wait for the configuration to complete. After completion, your screen should look like this.
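If you want to watch the progress from a script rather than the UI, the transport node state can be polled through the manager API. A minimal sketch, with the usual lab placeholders (this lists every transport node, which in this lab means only our hosts):

```python
# Minimal sketch: poll host transport node state until NSX configuration
# reports success. Manager address and credentials are placeholders (assumptions).
import time
import requests
import urllib3

urllib3.disable_warnings()  # lab only

NSX = "https://nsxmanager.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

while True:
    r = requests.get(f"{NSX}/api/v1/transport-nodes", auth=AUTH, verify=False)
    r.raise_for_status()
    states = {}
    for node in r.json()["results"]:
        s = requests.get(f"{NSX}/api/v1/transport-nodes/{node['id']}/state",
                         auth=AUTH, verify=False).json()
        states[node["display_name"]] = s.get("state")
    print(states)
    if all(v == "success" for v in states.values()):
        break
    time.sleep(30)  # check again in 30 seconds
```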


Now our hosts are configured for NSX-T; however, we are only halfway through our configuration.

Before we move forward, we need to make sure we configure backups for NSX-T.
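NSX-T backs up to an SFTP server, and the schedule can be set through the cluster backup API. A minimal sketch follows; the backup server, directory, credentials, fingerprint and passphrase are all placeholders (assumptions), and you should fetch the real SSH fingerprint from your backup host before running it:

```python
# Minimal sketch: point NSX-T at an SFTP server for scheduled backups.
# Server, directory, credentials, fingerprint and passphrase are placeholders
# (assumptions); replace them with your environment's values.
import requests
import urllib3

urllib3.disable_warnings()  # lab only

NSX = "https://nsxmanager.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

config = {
    "backup_enabled": True,
    "backup_schedule": {
        "resource_type": "IntervalBackupSchedule",
        "seconds_between_backups": 86400,  # one backup per day
    },
    "remote_file_server": {
        "server": "backup.lab.local",
        "port": 22,
        "directory_path": "/backups/nsx",
        "protocol": {
            "protocol_name": "sftp",
            "ssh_fingerprint": "SHA256:<fingerprint-of-backup-server>",
            "authentication_scheme": {
                "scheme_name": "PASSWORD",
                "username": "backup",
                "password": "<backup-user-password>",
            },
        },
    },
    "passphrase": "<backup-encryption-passphrase>",
}
r = requests.put(f"{NSX}/api/v1/cluster/backups/config",
                 json=config, auth=AUTH, verify=False)
r.raise_for_status()
```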

In my next post, "Step-by-Step-NSX-T 3.1 design and Install-P3", I am going to cover the Edge node configuration, which is essential for our north-south communication.

I hope I was able to add value. If your answer is yes, then don't forget to share and follow. 😊

If you want me to write on specific content or you have any feedback on this post, kindly comment below.

If you want, you can connect with me on LinkedIn, and please like and subscribe to my YouTube channel VMwareNSXCloud for step-by-step technical videos.

Comments:
Well, it's a good question, and trust me, anyone who has worked with NSX-T would respond with "why N-VDS?"

But to help you understand, a small example: using the VDS reduces the number of physical uplinks required if the customer doesn't want to keep management, backup and similar traffic on the N-VDS. Second, vSphere admins are well versed in VDS management, whereas for the N-VDS one first needs to understand what an N-VDS is. I hope that helped you understand why VMware actually came up with the option of integrating via the VDS instead of the N-VDS.
