Sunday, April 12, 2020

Scaling out vSAN Cluster Compute, File Service Shares and iSCSI Target Service.

Hello Everyone,


Today, we are going to cover how to scale out vSAN cluster compute, storage, File Service shares and the iSCSI Target service.
In my case, I have a cluster of 4 hosts running vSphere 7 and vSAN 7, and I want to add a new host (ESXi05) to my existing vSAN cluster, expand the vSAN datastore, and also expand my vSAN File Services cluster.
Before going forward, we need to make sure that this new ESX host has the same configuration as the existing ones in terms of CPU, RAM, network and disk.
First of all, when you add a host into the cluster, make sure it is in maintenance mode. If it is not in maintenance mode, vSAN FS will instantly try to clone a vSAN File Services agent VM (FS VM) onto it, and that process will fail as there is no disk group yet.
After you have added it to the cluster, you have to create the disk group first. Claim all the disks that need to be part of the disk group and create it. Once that is done, you can take the host out of maintenance mode.

Let's do it quickly.
1. Go to the cluster.
2. Add the new host (ESXi05) into the cluster.

3. Add this new ESX host to all DvSwitches, like the other hosts.
Now, it is time to create the disk group for this new host.
1. Click on the cluster >> click on Configure, then go to the vSAN section, choose Disk Management and click on "CLAIM UNUSED DISKS".

Select the disks on this new host for the cache and capacity tiers, as shown below.
Click on CREATE and a few tasks will be initiated on vCenter for the disk group creation.
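
If you prefer the command line, the same disk group can be created from the new host's ESXi shell with esxcli; a minimal sketch, where the naa.* device IDs are placeholders for your own cache and capacity devices:

# List candidate devices on the host to find the device IDs (placeholder filter)
esxcli storage core device list | grep -i naa
# Create the disk group: -s is the cache-tier SSD, -d is a capacity-tier disk
esxcli vsan storage add -s naa.CACHE_DEVICE_ID -d naa.CAPACITY_DEVICE_ID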

Once the disk group is created, the vSAN datastore capacity will increase automatically.

However, one thing you will need to do is expand the IP Pool for the vSAN FS Protocol Stack container. 
  • Go to your cluster
  • Click on vSAN / Services
  • Go to File Service and click Edit on the right
  • Go to the IP Pool page by clicking Next twice
  • Add that additional IP address and DNS Name

  • Click Next / Finish
Now, take this new host out of maintenance mode.
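
If you are doing this from the host itself, a quick sketch of the equivalent commands:

# Take the host out of maintenance mode
esxcli system maintenanceMode set --enable false
# Confirm the host is an active member of the vSAN cluster
esxcli vsan cluster get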

A task will be initiated to install a new vSAN File Service node/agent on this new ESX host.





So, all looks good.

Now you can increase the file share size, as the vSAN datastore has enough space available.




Since the vSAN datastore has enough free space available, you can also create/expand iSCSI LUNs if you have enabled the iSCSI Target service on the vSAN cluster as well.
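
For completeness, this is roughly how a Linux initiator would discover and attach such a LUN; the target IP and IQN below are hypothetical placeholders:

# Discover targets exposed by the vSAN iSCSI Target service (IP is a placeholder)
iscsiadm -m discovery -t sendtargets -p 172.16.10.20:3260
# Log in to the discovered target (IQN is a placeholder)
iscsiadm -m node -T iqn.1998-01.com.vmware:example-target -p 172.16.10.20:3260 --login
# Rescan sessions so a newly expanded LUN size is picked up
iscsiadm -m session --rescan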

So, in this post, we have scaled out vSAN compute (by adding a new host with CPU and RAM), vSAN storage (by adding a new disk group), the File Service shares and the iSCSI Target service as well.

That's all. Have a great day ahead and Cheers..



Friday, April 10, 2020

Configure Native File Services on vSAN 7.0 !!


Hello Everyone,

Today, we are covering the new and exciting feature of vSAN 7.0: Native File Services on vSAN.

This is quite an interesting feature for a file share platform, which can leverage the benefits of HCI-powered storage, i.e. VMware vSAN.

vSAN File Services, fully integrated into vSAN 7, now has the ability to create file shares alongside block volumes on your vSAN datastore. These file shares are accessible via the NFS v3 and NFS v4.1 protocols. It is very simple to set up and use, as you would expect with vSAN. It is also fully integrated with the vSAN Virtual Object Viewer and Health, with its own set of checks for both the File Server Agents and the actual file shares themselves.
Let’s begin with a look at how to deploy vSAN File Services. It really is very straightforward, and is enabled in much the same way as previous vSAN services. We’ll begin with a 4-node vSAN 7 cluster, as shown below.




Next, in the vSphere Client, click on the cluster and choose the Configure tab on the right side.
Navigate to the vSAN Services section.
Click on Services and you will see on the right side that File Services is currently disabled.



Click on Enable and a new wizard will open, as shown below.





Before clicking NEXT, here are some prerequisites for enabling File Service. We should have all these details before proceeding.


Checklist

The following information is needed to configure File Service.
  • Static IP addresses, subnet masks and gateway for the file servers
  • A DNS name for each IP address, or allow the system to do a reverse DNS lookup (see the quick check below).
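
A quick way to verify this up front from any Linux box; fs01.lab.local and 172.16.10.10 below are hypothetical stand-ins for one of your file server entries:

# Forward lookup: the DNS name should resolve to the file server IP
nslookup fs01.lab.local
# Reverse lookup: the IP should resolve back to the same name
nslookup 172.16.10.10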

Click NEXT and it will attempt to download the File Service agent appliance OVF, which is used to create an appliance acting as an NFS server.

If there is no option for a direct internet connection, then download the VMware Virtual SAN File Services Appliance 7.0 from here (https://my.vmware.com/group/vmware/details?downloadGroup=VSAN-FILESERVICE-700&productId=974).


File Services is implemented on vSAN as a set of File Server Agents, managed by the vSphere ESX Agent Manager. These are very lightweight, opinionated virtual appliances running Photon OS and Docker. They behave as NFS servers and provide access to the file shares.
There is a Distributed File System sitting between the NFS file shares and vSAN. This Distributed File System component is how the configuration state (file share names, file share redirects, etc.) is shared across all of the File Service Agents. If any of the File Service Agents were to fail on one of the ESXi hosts in the vSAN cluster, this enables the agent to be restarted on any other host in the vSAN cluster and continue to have access to all of the metadata around its file shares.



Click on the Manual approach and browse to the location of the OVF file. It requires 6 files; make sure you have downloaded all of them.

Click NEXT





Give details for the Domain. Click NEXT







Give Network details and Click NEXT




Give IP Pool details



Review your details and Click FINISH




This will lead to a number of tasks being initiated in your vSphere environment to deploy the OVF template and install the agents, which I’ve captured here:








Alright, when the tasks have completed successfully without any errors, you will have a new vSAN File Service node/agent per vSAN node (4 agents for my 4-node vSAN cluster).




And if we take a look at the vSAN File Service in vSAN Services, we now see it enabled with various additional information, most of which we provided during the setup process:





Now that we have enabled vSAN File Services, let’s go ahead and create our first file share.

A new menu item now appears in Configure > vSAN called File Service Shares. You can see the domain (vSAN-FS) that we created in the setup, the supported protocols (NFS v3 and 4.1), and the Primary IP. One of the File Service Agents was designated as the primary during setup, and this is the IP address used for mounting all NFS v4.1 shares. The connections are redirected internally to the actual IP address of the agent presenting the share, using an NFS referral. Click on ADD to create the first share.



In this wizard, we will provide a name for the share. We will also pick a vSAN storage policy for the share. The policy can include all of the features that we associate with block storage policies for performance and availability. We also have the option of setting a warning threshold and a hard quota. Finally, you can also add labels to the file share.



Click NEXT

Configure network access details.

This step is to specify which networks can access the file share. You can make the share accessible from any network, or you can specify which particular networks can access the file share and what permissions they have, e.g. Read Only or Read Write. The Root squash checkbox is a security technique used heavily in NFS which ‘squashes’ the permissions of any root user who mounts and uses the file share.



Click NEXT, review the file share, and click FINISH to create it.



A file share has been created and looks as shown below.






So, it was quite a simple way to create file shares with just a few clicks in the UI. And of course, all shares are deployed on the vSAN datastore, and are configured for both availability and performance through Storage Policy Based Management (SPBM).

One more thing worth mentioning: if you need to change the soft or hard quota of a file share, add some labels, or change the network access, simply select the file share, click Edit, and make your changes on the fly without any disruption to the shares.

Now that the file share has been built, let's consume it from a Linux (Red Hat 7.6) VM.

To check the mount availability, I ran the showmount -e command on my RHEL VM, and here is the output.

As we can see the file share /FS01 is on my fourth File Service Agent with the IP Address 172.16.10.13. To mount this file share as NFS v4.1 via the Primary IP and the NFSv4 referral mechanism, I need to include the root share (/vsanfs) in the mount path even though it is not shown in the showmount output. This will then refer the client request to mount the file share to the appropriate File Service Agent.
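
For reference, the check looks roughly like this (172.16.10.13 is the agent IP from my lab; yours will differ):

# List the exports visible on the File Service Agent
showmount -e 172.16.10.13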



Now, let's first make a directory on the VM to mount this NFS share, and then mount the share.

I created a new directory named /share1 and mounted the NFS share; see below.

The mount command output below shows that the NFS share has been mounted successfully with NFS version 4.1.
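
Reconstructed from the steps above, the commands look roughly like this; <primary-ip> is a placeholder for the Primary IP shown under File Service Shares:

# Create the mount point and mount via the primary IP and the /vsanfs root share
mkdir /share1
mount -t nfs -o vers=4.1 <primary-ip>:/vsanfs/FS01 /share1
# Verify which NFS version is in use
mount | grep share1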

You can also mount this share with NFS version 3, using the command shown below.







For the NFS version 3 mount option:
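
A sketch of the v3 equivalent; note that v3 mounts the agent's IP and share path directly, without the /vsanfs root share:

mount -t nfs -o vers=3 172.16.10.13:/FS01 /share1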



And if you can’t remember the correct syntax, in the vSphere UI simply select the file share, click on ‘Copy URL‘, select which version of NFS you want to mount, and copy that exact URL to mount your NFS share.



So, this is a really exciting feature for NFS shares, and we can use it for our NFS share requirements with all of vSAN's native storage features like availability, redundancy, etc.

Thanks for your time and Cheers..




Monday, April 6, 2020

Overview of VTEP (VXLAN Tunnel Endpoint) in VMware NSX !!


A VTEP, also known as a VXLAN Tunnel Endpoint, has a very significant role on the NSX platform. The VTEP works on a VMkernel port group created on the ESXi host. VXLAN is a hypervisor-based kernel module, installed during ESX host preparation via NSX Manager.
It handles all encapsulation and decapsulation for the VXLAN network.


VTEP Table - Every VTEP on an ESXi host reports the VNIs that it is a member of to the NSX Controller. The NSX Controllers maintain this list and send it to all VTEPs, so each VTEP has a full inventory of the VNIs in which it participates. The NSX Controllers also use this list to select the VTEP proxies.

MAC Table - VTEPs report every known MAC address in each VNI to the NSX Controller. This cuts down the unnecessary ARP requests flooded among the individual hosts' VTEPs. If a VTEP needs the MAC for an IP, it asks the NSX Controller. If the NSX Controller has the entry, it returns it to the VTEP; if not, the NSX Controller floods the request to the other VTEPs like a broadcast.

IP Report - Each VTEP sends its MAC address and IP mapping details to the NSX Controller.

ARP Table - The NSX Controller uses the IP report, which includes the MAC address and IP mappings, to create the ARP table.

Whenever a VM sends an ARP request, it is captured by the VTEP.
If the VTEP knows the answer, it replies to the VM.
If the VTEP does not know the answer, it requests it from the NSX Controller.
If the NSX Controller does not know the answer either, the request is broadcast to all VMs in the same VNI.
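
If you want to see these tables for yourself, they can be inspected from the NSX-v Controller CLI; a sketch, where 5000 is a placeholder for your logical switch's VNI:

# VTEPs that have joined the VNI
show control-cluster logical-switches vtep-table 5000
# MAC-to-VTEP mappings reported for the VNI
show control-cluster logical-switches mac-table 5000
# IP-to-MAC (ARP) entries built from the IP reports
show control-cluster logical-switches arp-table 5000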













Saturday, April 4, 2020

How to install vCenter Server 7.0 !!

VMware vSphere 7.0 has been announced by VMware and has been available since 02 April 2020.

There are many new features introduced in this new vCenter Server version 7.0:

1. vSphere with Kubernetes
2. Improved DRS
3. Assignable Hardware
4. vSphere Lifecycle Manager
5. Refactored vMotion
6. Intrinsic Security
7. External PSC model deprecated
8. vCenter Server on Windows is no longer available
9. Precision clock for PTP support, and many more


Here is how to install VMware vCenter Server 7.0 (or VCSA 7.0)
  • First, download the VCSA installation media from your https://my.vmware.com account.
  • Next, mount the ISO on your local machine – In this example, I have mounted it to the E: drive.
  • Depending on your OS, you can launch the installer. For my Windows machine, this is simply at: E:\vcsa-ui-installer\win32\installer.exe (a scripted CLI alternative is sketched below).
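
As an aside, the ISO also ships a scripted CLI installer alongside the UI one; a minimal sketch (Linux shell shown; win32 and mac folders exist too), where the mount point and JSON path are hypothetical and the template is adapted from the samples under vcsa-cli-installer/templates/install:

# Run the scripted installer against a filled-in deployment template
/mnt/vcsa/vcsa-cli-installer/lin64/vcsa-deploy install --accept-eula /tmp/embedded_vCSA_on_ESXi.json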
The installer will launch. Select Install:


Here, we don't have an option for External PSC selection, as the External PSC has been permanently deprecated in this version. This will deploy the VCSA with an embedded PSC only.

Click NEXT


Accept EULA, Click NEXT



Fill in the target vCenter or ESX host details where this VCSA will be deployed.

Click NEXT



Give the vCenter VM name and root password.
Click NEXT


Select deployment size of the VCSA server as per your requirement. Click NEXT


Select the desired datastore where this VCSA VM will be provisioned; enable thin mode if you want. Also, if you are using vSAN, then select the vSAN option. Click NEXT


Configure network settings for the VCSA VM. Select the VM network; provide the FQDN, IP address, subnet mask, gateway and DNS server. Click NEXT



Review all your settings; if anything needs to be changed, you can modify it here. Click FINISH.


Install - Stage 1: the Deploy vCenter Server progress bar will start. It will take at least 15 minutes.



Install - Stage 1, Deploy vCenter Server, completed. You can proceed further to the Stage 2 setup. Click on Continue, or access the VAMI interface of this VCSA VM and go to Stage 2.



Now we are in Stage 2, Set Up vCenter Server.

Click NEXT


Check and verify the vCenter VM configuration and change anything if required. Also, enable SSH and give the NTP server IPs for time sync.
Time sync is very important for a successful deployment (a quick sanity check follows below).
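
A quick sanity check of the NTP source before you point the appliance at it; pool.ntp.org below is a stand-in for your own NTP server:

# Query the NTP server without adjusting the local clock
ntpdate -q pool.ntp.org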
Click NEXT


It will save all these settings and proceed further.


Provide the SSO details. Create a new SSO domain, or if you want to join an existing SSO domain, choose the "Join Existing SSO domain" option.
Click NEXT



Select the CEIP option or not; it is your choice. Click NEXT


Review your details and if all good then Click on FINISH.


A warning will pop up; click OK.



Stage 2 is in progress and will take around 20-30 minutes.




Now, the task has been completed successfully without any issues.

We have successfully deployed VCSA version 7.0. It is up and accessible.





That's all. Have a great day ahead and cheers !!
