Friday, June 30, 2023

New Enhancements to the Network and Security Fabric in VMware Cloud Foundation 5.0

VMware Cloud Foundation is VMware’s comprehensive software-defined infrastructure (SDI) platform for deploying and managing private and hybrid clouds. As part of the newest release of VMware Cloud Foundation, we are announcing the integration of NSX 4.1.0 and its features that enhance the user and administrator experience.

NSX 4.1.0 adds a variety of new features and enhancements for virtualized networking and security which can be leveraged within a VMware Cloud Foundation 5.0 deployment.  Important updates include:



Summary of NSX 4.1.0 Highlights

  • VMware Cloud Foundation 5.0 with NSX 4.1.0 support comes with platform enhancements such as multi-tenancy for networking resources and NSX Application Platform (NAPP) 4.0.1.1.
  • Antrea is a Kubernetes-native project that implements the Container Network Interface (CNI) and Kubernetes NetworkPolicy to provide network connectivity and security for pod workloads. NSX 4.1.0 introduces new container networking and security enhancements that allow firewall rules to be created with a mix of VMs and Kubernetes Ingress/egress objects.
  • Additional Layer 3 networking services are made available to the VMware Cloud Foundation Fabric through the deployment of inter-VRF routing.
  • An improved online diagnostic system that provides predefined runbooks with debugging steps for troubleshooting specific issues.

Benefits of Leveraging NSX 4.1.0

1.) Container Networking and Security Enhancements

VMware Container Networking with Antrea offers users signed images and binaries, along with full enterprise support for Project Antrea. VMware Container Networking integrates with managed Kubernetes services to further enhance Kubernetes network policies. It also supports Windows and Linux workloads on Kubernetes across multiple clouds.

NSX 4.1.0 introduces new container networking and security enhancements which allow firewall rules to be created with a mix of VMs and Kubernetes Ingress/egress objects. Additionally, dynamic groups can be created based on NSX tags and Kubernetes labels. This improves the usability and functionality of using NSX to manage Antrea clusters. 

Users can create firewall policies that allow or block traffic between different virtual machines and Kubernetes pods in a single rule. A new enforcement point is also introduced that includes all endpoints, and the correct apply-to is determined based on the source and destination group member targets.
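
To make this concrete, here is a minimal Python sketch of creating such a dynamic group through the NSX Policy API. The manager address, credentials, group ID, and tag value are placeholders, and the Kubernetes-label variant of the group expression is omitted because its exact syntax depends on the Antrea inventory reported to NSX; treat this as a sketch rather than a reference implementation.

    # Sketch: create an NSX group whose membership tracks an NSX tag dynamically.
    # Host, credentials, group id, and tag value are illustrative placeholders.
    import requests

    NSX_HOST = "nsx-mgr.example.com"      # placeholder NSX Manager FQDN
    AUTH = ("admin", "VMware1!")          # placeholder credentials

    group = {
        "display_name": "web-workloads",
        "expression": [
            {
                "resource_type": "Condition",
                "member_type": "VirtualMachine",  # select VMs by NSX tag
                "key": "Tag",
                "operator": "EQUALS",
                "value": "web",
            }
        ],
    }

    resp = requests.patch(
        f"https://{NSX_HOST}/policy/api/v1/infra/domains/default/groups/web-workloads",
        json=group,
        auth=AUTH,
        verify=False,  # lab convenience only; validate certificates in production
    )
    resp.raise_for_status()

Once such a group exists, it can be referenced as a source or destination in a distributed firewall rule alongside groups built from Kubernetes labels.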

2.) Better Defense Against Cyberattacks with NDR Functionality

As network attacks become more common, it is increasingly important to leverage the newest security features. By deploying NSX 4.1.0 as part of VMware Cloud Foundation 5.0, you gain new Distributed Firewall capabilities together with new Network Detection and Response (NDR) features.

Network Detection and Response technology enables the security team to visualize attack chains by condensing massive amounts of network data into a handful of “intrusion campaigns.” Network Detection and Response achieves this visualization by aggregating and correlating security events such as detected intrusions, suspicious objects, and anomalous network flows.
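
To illustrate the aggregation idea, the toy Python below condenses individual security events into campaigns by correlating on the affected host. This is purely conceptual: the event fields and the naive correlation key are invented for illustration and say nothing about how NSX NDR actually correlates.

    # Conceptual only: condense discrete security events into "campaigns" by
    # correlating on a shared entity (here, the affected host).
    from collections import defaultdict

    events = [
        {"type": "intrusion_detected", "host": "10.0.1.5", "ts": 100},
        {"type": "anomalous_flow",     "host": "10.0.1.5", "ts": 160},
        {"type": "suspicious_object",  "host": "10.0.1.5", "ts": 240},
        {"type": "anomalous_flow",     "host": "10.0.2.9", "ts": 300},
    ]

    campaigns = defaultdict(list)
    for event in sorted(events, key=lambda e: e["ts"]):
        campaigns[event["host"]].append(event)    # naive correlation key

    for host, chain in campaigns.items():
        print(f"campaign on {host}: " + " -> ".join(e["type"] for e in chain))

Four raw events collapse into two campaigns, each of which reads as an attack chain rather than as isolated alerts.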

3.) Improved Online Diagnostic System

Online Diagnostics provides predefined runbooks that contain debugging steps to troubleshoot a specific issue. Troubleshooting playbooks or runbooks are a series of steps or procedures that are followed to diagnose and resolve issues in a system or application. They are designed to provide a structured approach to troubleshooting and help ensure that issues are resolved quickly and effectively.

These runbooks can be invoked through the API and trigger debugging steps using the CLI, APIs, and scripts. Recommended actions are provided after debugging to fix the issue, and the artifacts generated during debugging can be downloaded for further analysis. The Online Diagnostic System helps automate debugging and simplifies troubleshooting.
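
As a rough sketch of what an API-driven runbook invocation could look like in Python, the snippet below posts against a placeholder URI. The endpoint path, runbook name, and payload shape are assumptions made for illustration only; consult the NSX 4.1 API reference for the actual Online Diagnostic System resources.

    # Sketch only: trigger a diagnostics runbook via the NSX REST API.
    # The URI, runbook id, and payload are placeholders (assumptions), not
    # documented endpoints; check the NSX 4.1 API reference for the real ones.
    import requests

    NSX_HOST = "nsx-mgr.example.com"   # placeholder
    AUTH = ("admin", "VMware1!")       # placeholder
    RUNBOOK_ID = "OverlayTunnel"       # hypothetical runbook name

    resp = requests.post(
        f"https://{NSX_HOST}/api/v1/diagnosis/runbooks/{RUNBOOK_ID}/invocations",  # assumed path
        json={"target": {"target_type": "TransportNode", "target_id": "<node-uuid>"}},
        auth=AUTH,
        verify=False,  # lab convenience only
    )
    print(resp.status_code, resp.text)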


Leveraging NSX 4.1.0 as part of the VMware Cloud Foundation 5.0 release brings key updates and enhancements across network and security use cases for private, public, and multi-cloud environments, enabling you to continue accelerating the delivery of value to your organization.

Cheers!

For more info about NSX network and security, please refer to the VMware documentation.

Monday, June 26, 2023

VMware vSAN ESA (Express Storage Architecture)

The announcement of the VMware vSAN Express Storage Architecture™, or ESA for short, represents a massive step forward in the capabilities of the solution and the benefits to our customers.  This has been achieved in a way that is largely transparent to the user, in the same product they are already familiar with.

The vSAN Express Storage Architecture is a new way to process and store data.  It is an optional, alternative architecture in vSAN that is designed to achieve all-new levels of efficiency, scalability, and performance.  The ESA is optimized to exploit the full potential of the very latest in hardware and unlocks new capabilities for our customers.  It is introduced in vSAN 8 and, when using ReadyNodes approved for the ESA, can be selected at the time of creating a cluster.  The ESA in vSAN 8 is an alternative to the Original Storage Architecture (OSA), which is found in all previous versions of vSAN and remains available in vSAN 8.

The Express Storage Architecture in vSAN 8 stands on the shoulders of the OSA found in previous versions of vSAN and in vSAN 8.  vSAN had already solved many of the great challenges associated with distributed storage systems, and we wanted to build off of these capabilities while looking at how best to optimize the data path to reflect the capabilities of today's hardware.

The advances in architecture primarily come from two areas, as illustrated in Figure 2.

  • A new patented log-structured file system.  This new layer in the vSAN stack, known as the vSAN LFS, allows vSAN to ingest new data quickly and efficiently while preparing the data for a very efficient full stripe write.  The vSAN LFS also allows vSAN to store metadata in a highly efficient and scalable manner.  (A toy illustration of the log-structured write pattern follows this list.)
  • An optimized log-structured object manager and data structure.  This layer is a new design built around a high-performance block engine and key-value store that can deliver large write payloads while minimizing the overhead needed for metadata.  The new design was built specifically so that the highly efficient upper layers of vSAN can send data to the devices without contention.  It is highly parallel and helps drive near device-level performance in the ESA.
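
For intuition only, here is a toy append-only key-value log in Python: every write is a sequential append to the tail of a log file, and an in-memory index maps keys to offsets, so an update never rewrites data in place. It illustrates the general log-structured write pattern, not the vSAN LFS itself, which additionally coalesces writes into full-stripe payloads and manages its metadata durably.

    # Toy log-structured store: writes always append to the log tail; reads go
    # through an in-memory index of key -> (offset, length). Conceptual only.
    import os

    class TinyLog:
        def __init__(self, path):
            self.index = {}                       # key -> (offset, length)
            self.log = open(path, "ab+")

        def put(self, key, value: bytes):
            self.log.seek(0, os.SEEK_END)
            offset = self.log.tell()
            self.log.write(value)                 # sequential append, no overwrite
            self.log.flush()
            self.index[key] = (offset, len(value))

        def get(self, key) -> bytes:
            offset, length = self.index[key]
            self.log.seek(offset)
            return self.log.read(length)

    store = TinyLog("/tmp/tinylog.bin")
    store.put("block-42", b"new data")
    store.put("block-42", b"newer data")          # an update is just another append
    print(store.get("block-42"))                  # b'newer data'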

Some of the capabilities of vSAN ESA are as follows:

  • The space efficiency of RAID-5/6 erasure coding with the performance of RAID-1 mirroring.
  • Adaptive RAID-5 erasure coding for guaranteed space savings on clusters with as few as 3 hosts.
  • Storage policy-based data compression offers up to 4x better compression ratios per 4KB data block than the vSAN OSA.
  • Encryption that secures data in-flight and at rest with minimal overhead.
  • Adaptive network traffic shaping to ensure VM performance is maintained during resynchronizations.
  • Lower total cost of ownership (TCO) by removing dedicated cache devices in favor of a flexible, single-tier architecture in vSAN 8.
  • Simplified management, and smaller failure domains by removing the construct of disk groups.
  • New native scalable snapshots deliver extremely fast and consistent performance.  Achieve backups more quickly through snapshot consolidations that are up to 100x faster.

The vSAN Express Storage Architecture builds on the foundation that has made vSAN the number one hyperconverged solution on the market.

Sunday, June 25, 2023

VMware Validated Design for SDDC 6.2

VMware Validated Design is a family of solutions for data center designs that span compute, storage, networking, and management, serving as a blueprint for your Software-Defined Data Center (SDDC) implementation. The documentation of VMware Validated Design consists of successive deliverables for all stages of the SDDC life cycle.

The Introducing VMware Validated Design documentation includes the following information:

  • Design objectives

  • Deployment flow of the SDDC management components

  • Document structure and purpose

  • SDDC high-level overview


Use VMware Validated Design to build a scalable Software-Defined Data Center that is based on VMware best practices.

VMware Validated Design has the following advantages:

One path to SDDC

After you satisfy the deployment requirements, follow one consistent path to deploy an SDDC.

VMware Validated Design provides a tested solution path with information about product versions, networking architecture, capabilities, and limitations.

SDDC design for use in production

VMware Validated Design supports an SDDC that has the following features:

  • High-availability of management components

  • Backup and restore of management components

  • Monitoring and alerting

Validated design and deployment

The prescriptive documentation of VMware Validated Design is continuously tested by VMware.

Validation provides the following advantages to your organization:

  • Validated product interoperability

  • Reduced risk of deployment and operational problems

  • Reduced test effort

Validated solution capabilities

  • Churn rate of tenant workloads

  • High availability of management components

  • Operational continuity

  • Design with dual-region support in mind

Fast SDDC standup

You can implement a data center without engaging in design work and product research. After you download all SDDC products, follow the detailed design and step-by-step instructions.

Support for latest product releases

Every version of a VMware Validated Design accommodates new product releases. If you have deployed an SDDC according to an earlier version of a VMware Validated Design, you can directly follow the validated design to upgrade your environment.


VMware Validated Design supports an SDDC architecture according to the requirements of your organization and the resource capabilities of your environment.

High-Level Logical Design of the SDDC

The SDDC according to VMware Validated Design contains the main services that are required to cover provisioning of virtualized and containerized workloads, cloud operations, and cloud automation.

Logical Design of the SDDC

According to the SDDC implementation type, a VMware Validated Design has objectives to deliver prescriptive content about an SDDC that is fast to deploy and is suitable for use in production.

VMware Validated Design Objectives

Main objective: An SDDC capable of automated provisioning of on-premises workloads, hybrid workloads, and containers.

Scope of deployment: Greenfield deployment of the management and workload domains of the SDDC, and incremental expansion of these domains as needed.

Cloud type: On-premises private cloud.

Number of regions and disaster recovery support: Single-region SDDC with multiple availability zones, which you can potentially use as a best practice for a second VMware Cloud Foundation instance.

Availability zones are separate sites connected with low latency and high bandwidth. Regions have higher-latency, lower-bandwidth connectivity.

The documentation provides guidance for a deployment that supports two regions for failover in the following way:

  • The design documentation provides guidance for an SDDC whose management components are designed to keep operating in the event of planned migration or disaster recovery.

  • The deployment documentation provides guidance for an SDDC that supports two regions for both management and tenant workloads.

Maximum number of virtual machines and churn rate: By using the SDDC Manager API in VMware Cloud Foundation, you can deploy a VMware vCenter Server™ appliance of a specified deployment and storage size. As a result, in this VMware Validated Design, you determine the maximum number of virtual machines in the SDDC according to a medium-size or larger vCenter Server deployment specification. (A sketch of such an API call follows the churn-rate note below.)

  • 4,000 running virtual machines per virtual infrastructure workload domain

  • 56,000 running virtual machines overall, distributed across 14 virtual infrastructure workload domains

  • Churn rate of 750 virtual machines per hour

    Churn rate covers the provisioning, power-cycle operations, and decommissioning of tenant virtual machines by using a blueprint in the cloud automation platform. A churn rate of 100 means that 100 tenant workloads are provisioned, pass the power-cycle operations, and are deleted.
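
As promised above, here is a rough Python sketch of such a call against the SDDC Manager API. The token endpoint (/v1/tokens) and the domains endpoint (/v1/domains) exist in VMware Cloud Foundation, but the spec below is heavily abbreviated and the appliance-size field names are assumptions; the authoritative schema is in the VMware Cloud Foundation API reference.

    # Sketch: create a workload domain whose vCenter Server appliance uses a
    # chosen deployment and storage size. Abbreviated; size field names are
    # assumptions - see the VMware Cloud Foundation API reference.
    import requests

    SDDC = "https://sddc-manager.example.com"   # placeholder SDDC Manager URL

    # Obtain an access token first.
    token = requests.post(
        f"{SDDC}/v1/tokens",
        json={"username": "administrator@vsphere.local", "password": "<password>"},
        verify=False,  # lab convenience only
    ).json()["accessToken"]

    domain_spec = {
        "domainName": "wld01",
        "vcenterSpec": {
            "name": "wld01-vc01",
            "vmSize": "medium",       # assumed field: appliance deployment size
            "storageSize": "large",   # assumed field: appliance storage size
        },
        # cluster, host, and NSX specs omitted for brevity
    }

    resp = requests.post(
        f"{SDDC}/v1/domains",
        json=domain_spec,
        headers={"Authorization": f"Bearer {token}"},
        verify=False,
    )
    print(resp.status_code)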

Maximum number of containers or pods: 2,000 pods per Supervisor Cluster.

Number of workload domains in a region: Minimum two-domain setup, with a minimum of 4 VMware ESXi™ hosts in a domain.

The validated design requires the following workload domains for SDDC deployment:

  • Management domain. Contains the appliances of the SDDC management components.

  • One or more solution-specific workload domains for Infrastructure-as-a-Service (IaaS) and containers, up to 14 workload domains per region. Each of these domains:

    • Contains the tenant workloads.

    • Contains the SDDC services required to enable the solution that is deployed.

See Workload Domains in VMware Validated Design.

Shared use of components for management of workload domains: This VMware Validated Design uses a dedicated NSX-T Manager cluster for each workload domain.

Data center virtualization: Workload flexibility is maximized, and dependencies on static data center infrastructure are limited, by using compute, storage, and network virtualization.

Scope of guidance:

  • Clean deployment of the management domain, workload domains, and the solutions working on top of the infrastructure in those domains.

  • Incremental expansion of the deployed infrastructure:

    • In a single region

    • To additional availability zones

    • To additional regions

  • Deployment and initial setup of management components at the levels of virtualization infrastructure, identity and access management, cloud automation, and cloud operations.

  • Basic tenant operations, such as creating a single Rainpole tenant, assigning tenant capacity, and configuring user access.

  • Operations on the management components of the SDDC, such as monitoring and alerting, backup and restore, post-maintenance validation, disaster recovery, and upgrade.

Overall availability:

  • 99.7% management plane availability

  • Workload availability subject to workload-specific availability requirements

Planned downtime is expected for upgrades, patching, and ongoing maintenance.

Authentication, authorization, and access control:

  • Use of Microsoft Active Directory as the identity provider.

  • Use of service accounts with least-privilege role-based access control for solution integration.

Certificate signing: Certificates are signed by an external certificate authority (CA) that consists of a root authority layer and an intermediate authority layer.

Hardening: Tenant workload traffic can be separated from the management traffic.


In VMware Validated Design, a workload domain represents a logical unit that groups ESXi hosts managed by a vCenter Server instance with specific characteristics according to VMware SDDC best practices.

A workload domain exists in the boundaries of an SDDC region. A region can contain one or more domains. A workload domain cannot span multiple regions.

Each domain contains the following components:

  • One VMware vCenter Server™ instance.

  • At least one vSphere cluster with vSphere HA and vSphere DRS enabled. See Cluster Types.

  • One vSphere Distributed Switch per cluster for system traffic and segments in VMware NSX-T Data Center™ for workloads.

  • One NSX-T Manager cluster for configuring and implementing software-defined networking.

  • One NSX-T Edge cluster that connects the workloads in the domain for logical switching, logical dynamic routing, and load balancing.

  • In either of the two regions in a multi-region SDDC, one NSX-T Global Manager cluster for configuring software-defined networks that span multiple regions.

  • One or more shared storage allocations.

Management Domain

Contains the SDDC management components.

The management domain has the following features:

Features of the Management Domain

  • Types of workloads: Management workloads and the networking components for them.

  • Cluster types: Management cluster.

  • Virtual switch type: vSphere Distributed Switch for system traffic and NSX-T network segments, and NSX-T Virtual Distributed Switch (N-VDS) on the NSX-T Edge nodes.

  • Software-defined networking: NSX-T Data Center.

  • Shared storage type: VMware vSAN™ for principal storage and NFS for supplemental storage.

  • Time of deployment: First domain to deploy during the initial SDDC implementation.

  • Deployment method: Deployed by VMware Cloud Builder as part of the bring-up process of VMware Cloud Foundation, except for the region-specific VMware Workspace ONE® Access™ instance. You deploy the region-specific Workspace ONE Access instance manually and connect it to the NSX-T instance for the management domain.

Management Workloads for the Management Domain

All of the following components run in the first cluster in the management domain:

  • vCenter Server

  • NSX-T Manager cluster

  • NSX-T Edge cluster for north-south routing, east-west routing, and load balancing

  • NSX-T Global Manager cluster for global networking across multiple regions

  • Region-specific Workspace ONE Access instance for central role-based access control

Virtual Infrastructure Workload Domains

Contains tenant workloads that use NSX-T Data Center for logical networking. According to the requirements of your organization, you can deploy multiple virtual infrastructure (VI) workload domains in your environment.

A virtual infrastructure workload domain has the following features:

Features of a VI Workload Domain

  • Types of workloads: Tenant workloads and the networking components for them.

  • Cluster types: Shared edge and workload cluster, plus additional workload clusters.

  • Virtual switch type: vSphere Distributed Switch for system traffic from the management domain and for NSX-T network segments, and N-VDS on the NSX-T Edge nodes in the workload domain.

  • Software-defined networking: NSX-T Data Center.

  • Shared storage type: vSAN, vVols, NFS, or VMFS on FC for principal storage.

  • Time of deployment: After the initial SDDC bring-up of the management domain.

  • Deployment method: Deployed by SDDC Manager. For a multi-region SDDC, you deploy the NSX-T Global Manager cluster from an OVA file.

Management Workloads for a VI Workload Domain

  • vCenter Server. Deployed in the first cluster in the management domain. Not shared between workload domains.

  • NSX-T Manager cluster. Deployed in the first cluster in the management domain. Shared between workload domains whose workloads use the same overlay transport zone across domains, including domains where you use vRealize Automation for workload provisioning; in that case, the cluster is deployed with the first VI workload domain. Not shared for workload domains whose workloads must be connected to domain-specific transport zones.

  • NSX-T Edge cluster for north-south and east-west routing. Deployed in the shared edge and workload cluster in the workload domain. Shared under the same conditions as the NSX-T Manager cluster above: shared where workloads use the same overlay transport zone across domains (deployed with the first VI workload domain), and not shared where workloads must be connected to domain-specific transport zones.

  • NSX-T Global Manager cluster for global networking across multiple regions. Deployed in the first cluster in the domain.

vSphere with Tanzu Workload Domains

Contains containerized workloads that use vSphere with Tanzu for container provisioning and NSX-T Data Center for logical networking. According to the requirements of your organization, you can deploy multiple vSphere with Tanzu workload domains.

A vSphere with Tanzu workload domain has the following features:

Features of a vSphere with Tanzu Workload Domain

  • Types of workloads: Containerized workloads and the networking components for them.

  • Cluster types: Shared edge and workload cluster, plus additional workload clusters.

  • Virtual switch type: vSphere Distributed Switch for system traffic from the management domain and for NSX-T network segments, and N-VDS on the NSX-T Edge nodes in the workload domain.

  • Software-defined networking: NSX-T Data Center.

  • Shared storage type: vSAN, vVols, NFS, or VMFS on FC for principal storage.

  • Time of deployment: After the initial SDDC bring-up of the management domain.

  • Deployment method: You use SDDC Manager for environment validation and the vSphere Client for enabling vSphere with Tanzu.

Management Workloads for a vSphere with Tanzu Workload Domain

  • vCenter Server. Deployed in the first cluster in the management domain. Not shared between workload domains.

  • NSX-T Manager cluster. Deployed in the first cluster in the management domain. Shared between workload domains whose workloads use the same overlay transport zone across domains, including domains where you use vRealize Automation for workload provisioning; in that case, the cluster is deployed with the first vSphere with Tanzu workload domain. Not shared for workload domains whose workloads must be connected to domain-specific transport zones.

  • NSX-T Edge cluster for north-south and east-west routing. Deployed in the shared edge and workload cluster. Shared under the same conditions as the NSX-T Manager cluster above: shared where workloads use the same overlay transport zone across domains (deployed with the first vSphere with Tanzu workload domain), and not shared where workloads must be connected to domain-specific transport zones.

  • Supervisor Cluster. Deployed in the shared edge and workload cluster. Not shared between workload domains.


For more details on VMware Validated Design for SDDC, refer to VMware documentation:
https://docs.vmware.com/en/VMware-Validated-Design/6.2/introducing-vmware-validated-design/GUID-5B8D0FFC-141E-43A6-BCD4-BB3966581401.html

