Sam Walker

VCP-NV 6 Blueprint Section 1 Notes

Blog Post created by Sam Walker on Jul 13, 2015

Section 1 – Define VMware NSX Technology and Architecture

Objective 1.1 – Describe the Benefits of a VMware NSX Implementation

Identify challenges within a physical network infrastructure

  • Current networking and security systems are rigid, complex and often vendor-specific. 
  • Provisioning times are slow.
  • Workload placement and mobility are limited by physical topology – e.g., customers very rarely have stretched L2 between sites.
  • Dedicated hardware is always required, which creates artificial barriers, vendor lock-in and fragmentation.
  • VLAN and firewall rule limits and sprawl.

Explain common VMware NSX terms

  • Consumption – through a Cloud Management Platform (CMP)
  • Management Plane – the NSX Manager
  • Control Plane – the NSX Control Cluster
  • Data Plane – the vDS (or Open vSwitch), NSX Edge appliances, host VIB modules (VXLAN, Distributed Logical Router and Distributed Firewall)
    • VXLAN – overlay / encapsulation technology.  Works similarly to a VLAN in a physical environment.  A VIB is installed at the hypervisor level (per cluster) to enable this functionality.
    • Distributed Logical Router (DLR) – module installed within each participating host; allows routing decisions to be made at the kernel level, so where two VMs on the same host are on separate subnets, traffic can remain within the host (cuts down unnecessary N/S traffic, i.e., hair-pinning). Again, a VIB.
    • Distributed Firewall – enforced within the hypervisor kernel; can restrict based on L2 (MAC) and L3/L4 (IP & port) criteria
      • This is a key concept for micro-segmentation: avoid unnecessary N/S traffic while keeping security and management tight.
      • Rules can be applied to many elements (e.g., VM name, security tag, etc.) – a minimal sketch of the idea follows this list.
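
To make the rule-matching idea concrete, here is a minimal, hypothetical Python sketch of distributed-firewall-style evaluation against VM attributes and L3/L4 fields. The rule fields, names and tags are illustrative assumptions, not the NSX DFW schema:

  # Hypothetical illustration of distributed-firewall-style rule matching.
  # The rule fields below are illustrative; this is not the NSX DFW schema.
  from dataclasses import dataclass

  @dataclass
  class Flow:
      src_vm_tag: str    # security tag on the source VM
      dst_vm_name: str   # destination VM name
      protocol: str      # "tcp" / "udp"
      dst_port: int

  RULES = [
      # (description, match function, action)
      ("allow web tier to app tier on 8443",
       lambda f: f.src_vm_tag == "tier-web" and f.dst_vm_name.startswith("app-")
                 and f.protocol == "tcp" and f.dst_port == 8443,
       "allow"),
      ("default deny", lambda f: True, "block"),
  ]

  def evaluate(flow: Flow) -> str:
      """First matching rule wins, as in an ordered firewall rule table."""
      for description, matches, action in RULES:
          if matches(flow):
              return action
      return "block"

  print(evaluate(Flow("tier-web", "app-01", "tcp", 8443)))  # allow
  print(evaluate(Flow("tier-web", "db-01", "tcp", 3306)))   # block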

Describe and differentiate functions and services performed by VMware NSX

1.png

 

  • Logical Firewall – offered both through the VIB kernel module and the Edge Services Gateway (ESG).  Allows traffic to be inspected on a per-VM level. Rules are applied to and move with the VM (during vMotion, for example).  As this happens in the kernel, it’s near line-speed and has minimal impact on CPU.
  • Logical Loadbalancer – through the Edge Services Gateway; offers load balancing for services at L4-L7. Offered in two varieties:
    • One-arm – sits on the same network as the load-balanced VMs
      • Inbound: DNAT (VIP to node) & SNAT (client IP to LB IP)
      • Outbound: DNAT (node to LB) & SNAT (LB to client)
    • Two-arm – has two interfaces, sits between the load-balanced VMs and the clients
      • Exposes the clients’ IP addresses to the servers
      • Requires using the load balancer as the default gateway
      • Inbound: DNAT (VIP address to node)
      • Outbound: SNAT (node to VIP)
  • Logical VPN – offered through the ESG; provides both L2 and L3 services:
    • SSL VPN (also referred to as SSL VPN Plus):
      • Remote-access VPN over SSL for individual users
    • IPsec VPN:
      • Site-to-site L3 VPN to a remote gateway
    • L2 VPN:
      • L2 only
      • Client / server relationship
      • Can be used to join two NSX Edges
      • Other use cases: hybrid cloud / cloud bursting / service provider on-boarding, etc.
  • Logical L2 switch
    • Distributed as kernel modules; relies on overlay VXLAN switching rather than the underlying VLAN network.  VXLAN-to-VXLAN communication between hosts happens through each host’s VTEP interface (VXLAN Tunnel EndPoint), which adds roughly 50 bytes of encapsulation overhead to each frame (see the MTU sketch after this list).  Allows for up to 10,000 logical switches; segment IDs (VNIs) range from 5000 to roughly 16.7 million.
    • As host-host communication for VXLAN is across the VTEP interfaces, this is transparent to the VMs, so they appear to be on the same L2 network, despite physically being on separate L3 networks.  This allows for stretching L2 across L3. 
    • Three replication methods exist for replicating VXLAN traffic between hosts; which one to choose depends on the network infrastructure (whether it supports PIM and IGMP snooping, whether switch utilisation needs to be minimised, etc. – a decision sketch follows this list):
      • Multicast – Uses traditional multicast between local nodes to replicate traffic.  Across L3 boundaries (e.g., between sites), PIM must be enabled.  This option minimises network utilisation but requires the most physical network setup.
      • Unicast – No PIM or IGMP snooping requirements. Per segment, per site, a UTEP (Unicast Tunnel EndPoint) is deployed, which unicasts the replication traffic to each host.
      • Hybrid – Relies on IGMP snooping locally, so multicast is contained within a site, but no PIM is needed between sites.  This cuts down on local replication traffic but carries the same cross-site penalty as unicast mode.  Deploys an MTEP (Multicast Tunnel EndPoint) per segment, per site, which multicasts to other hosts within its L2 broadcast domain and unicasts to MTEPs on ESXi hosts in other L2 broadcast domains.
  • Logical L3 router
    • Deployed as a logical router (as distributed kernel modules with a logical router control VM) or as an ESG. 
      • Logical ***
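
As a quick sanity check on the roughly 50-byte VXLAN overhead mentioned under the logical L2 switch above, the arithmetic sketch below (Python, standard header sizes) shows why the transport network MTU is usually raised to at least 1550, commonly 1600. This is simple arithmetic, not NSX configuration:

  # Back-of-the-envelope VXLAN overhead calculation using the standard header sizes.
  # "MTU" here means the IP MTU of the transport (underlay) network.
  INNER_ETHERNET = 14   # the original guest Ethernet header, now carried as payload
  VXLAN_HEADER   = 8    # VXLAN header carrying the 24-bit VNI
  OUTER_UDP      = 8
  OUTER_IPV4     = 20

  overhead = INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4
  print(f"Encapsulation overhead: {overhead} bytes")               # 50

  guest_ip_mtu = 1500                                              # default MTU inside the VM
  print(f"Minimum underlay MTU: {guest_ip_mtu + overhead} bytes")  # 1550 (1600 commonly configured)

  # The 24-bit VNI field is where the ~16.7 million segment figure comes from:
  print(f"Available VNIs: {2**24:,}")                              # 16,777,216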

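The choice between the three replication modes comes down to what the physical network supports; the hypothetical helper below simply encodes the decision logic from the notes above (it is not an NSX API or setting name):

  # Hypothetical helper encoding the replication-mode guidance from these notes.
  def choose_replication_mode(igmp_snooping: bool, pim_routing: bool) -> str:
      if pim_routing and igmp_snooping:
          return "multicast"   # physical network does the replication; most setup required
      if igmp_snooping:
          return "hybrid"      # multicast locally, unicast between sites via MTEPs
      return "unicast"         # no physical multicast features needed; UTEPs per segment, per site

  print(choose_replication_mode(igmp_snooping=False, pim_routing=False))  # unicast
  print(choose_replication_mode(igmp_snooping=True, pim_routing=False))   # hybrid
  print(choose_replication_mode(igmp_snooping=True, pim_routing=True))    # multicast
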
Describe common use cases for VMware NSX

  • Data centre automation
    • Automate provisioning through the API
    • Streamline DMZ changes
  • Self-service enterprise IT
    • Isolated test, dev and prod environments on the same hardware
  • Multitenant Clouds
    • Automate provisioning
    • Maximize hardware sharing across tenants
  • DC Simplification
    • Network isolation
    • Freedom from VLAN / firewall rule sprawl

Objective 1.2 – Describe VMware NSX Architecture

Identify the components in a VMware NSX stack

  2.png

 

Identify common physical network topologies

Existing / traditional three tier networks:

Traditional distributed architecture; a main core network connecting to distribution switches, then to top-of-rack switches. Routing happens higher up the stack. 


3.png

Smaller (and virtualised) environments are more likely to use a distributed core setup:

  4.png

 

Now, the move is towards a leaf-and-spine architecture – more scalable, and routing can happen at the leaf layer.  No VLAN trunking between leaf and spine switches, so leaf switches make a routing decision and can forward on to spine switches if necessary.

5.png

 

This is a non-blocking fabric model, best used with equal cost multipathing (ECMP).

Also known as a Clos / TRILL fabric.

 

  1. N.B., there is no connectivity leaf-to-leaf or spine-to-spine… ALL connectivity is leaf-to-spine.
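
To illustrate the ECMP behaviour mentioned above: a leaf typically hashes a flow’s 5-tuple to pick one of its equal-cost uplinks, so every packet of a flow takes the same path. The sketch below is purely conceptual (Python, invented names), not a switch configuration:

  # Conceptual ECMP illustration: hash a flow's 5-tuple onto one of the
  # equal-cost leaf-to-spine uplinks. Names are invented for the example.
  import hashlib

  SPINE_UPLINKS = ["spine-1", "spine-2", "spine-3", "spine-4"]

  def pick_uplink(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
      key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}-{proto}".encode()
      index = int(hashlib.md5(key).hexdigest(), 16) % len(SPINE_UPLINKS)
      return SPINE_UPLINKS[index]

  # All packets of the same flow hash to the same spine, avoiding reordering:
  print(pick_uplink("10.0.1.10", "10.0.2.20", 49152, 443))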

 

Describe a basic VMware NSX topology

  6.png

Multiple VXLANs across the datacentre run a specific set of applications (in the above example, web, app and DB).  Hairpinning is avoided by using a DLR (Distributed Logical Router) to route at the hypervisor level.  VXLAN encapsulates the traffic between hypervisors using a VTEP (VXLAN Tunnel EndPoint), so the requirements on the physical network are smaller.

Differentiate functional services delivered by a VMware NSX stack (kind of similar to the 3rd point on Objective 1.1)

  • Logical Firewall
  • Logical Loadbalancer
  • Logical VPN
  • Logical L2 switch
  • Logical L3 router

 

Objective 1.3 – Differentiate VMware Network and Security Technologies

Identify upgrade requirements for ESXi hosts

Hosts must be running ESXi 5.0 as a minimum, although unicast replication mode is only available with ESXi 5.5 and later.  Also, Distributed Firewall rules on a logical switch are only applicable to hosts on 5.*** check

Hardware version 7 or later

VMware Tools 8.6 or later for vShield / guest introspection
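
A quick way to reason about these prerequisites: the hypothetical helper below flags anything under the minimums listed in these notes (ESXi 5.0, or 5.5 where unicast replication is wanted; VM hardware version 7; VMware Tools 8.6 for guest introspection). The function itself is purely illustrative:

  # Hypothetical prerequisite check based on the minimums listed in these notes.
  def check_prerequisites(esxi_version: tuple, hw_version: int,
                          tools_version: tuple, unicast_required: bool) -> list:
      issues = []
      min_esxi = (5, 5) if unicast_required else (5, 0)
      if esxi_version < min_esxi:
          issues.append(f"ESXi {esxi_version} is below the minimum {min_esxi}")
      if hw_version < 7:                       # VM-level requirement
          issues.append(f"VM hardware version {hw_version} is below the minimum 7")
      if tools_version < (8, 6):               # needed for vShield / guest introspection
          issues.append(f"VMware Tools {tools_version} is too old (needs 8.6 or later)")
      return issues

  print(check_prerequisites(esxi_version=(5, 1), hw_version=8,
                            tools_version=(9, 0), unicast_required=True))
  # ['ESXi (5, 1) is below the minimum (5, 5)']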

Identify steps required to upgrade a vSphere implementation

To upgrade vSphere environments, refer to the usual documentation (vCenter, then hosts, etc.)

To upgrade NSX:

  1. Upgrade NSX Manager with an upgrade package
  2. Upgrade Logical Switches
  3. Upgrade NSX Firewall (vShield App needs to be at 5.5 minimum)
  4. Upgrade NSX Edge devices
  5. Upgrade vShield Endpoint
  6. Uninstall and reinstall NSX Data Security
  7. “Upgrade” partner solutions (inverted commas as this probably entails a reinstall through Service Composer)

Describe core vSphere networking technologies

vSphere Distributed Switch and vSphere Standard Switch – explained in more detail in section 3 of the blueprint

Describe vCloud Networking and Security technologies

Predecessor to VMware NSX; NSX’s advantages over vCNS include:

  • A full REST API
  • VXLAN functionality doesn’t rely on Multicast on the physical switch
  • Dynamic routing on the Edge devices
  • Kernel based firewall and router (rather than vApp)
  • L2 Bridging between VXLAN and VLAN for physical device connectivity
  • Support for other hypervisors

Describe and differentiate VMware NSX for vSphere and VMware NSX for third-party hypervisors

NSX for vSphere vs NSX Multi-Hypervisor:

  • Switch type – NSX for vSphere: vSphere Distributed Switch; Multi-Hypervisor: Open vSwitch
  • Encapsulation type – NSX for vSphere: VXLAN; Multi-Hypervisor: GRE, STT, VXLAN
  • N/S traffic – NSX for vSphere: NSX Edge vApps; Multi-Hypervisor: physical NSX Edge appliances
  • Security – NSX for vSphere: East-west in the hypervisor kernel; Multi-Hypervisor: via ACLs / Security Groups
  • Additional – NSX for vSphere: load balancing & VPN; Multi-Hypervisor: N/A

Paul McSharry has done a great write up here http://www.elasticsky.co.uk/is-there-any-point-vcns-vs-nsx-v/

 

Objective 1.4 – Contrast Physical and Virtual Network Technologies

Differentiate logical and physical topologies

Logical:

  6.png

Physical:

  5.png

Differentiate logical and physical components (i.e. switches, routers, etc.)

  • Logical – Happening within the hypervisor kernel VIB modules and ESG
    • Switching – happens within the VXLAN VIB module. Traffic between VMs on different hosts flows across the hosts’ VTEP interfaces.
    • Routing – happens either within the host VIB DLR module for east-west traffic (again, across VTEP interface for inter-host communication) or on the edge services gateway (ESG) for north-south routing. ESG deployed as a vApp.
  • Physical – running on any physical network infrastructure
    • Switching – L2 on physical devices; traditional example: Cisco Nexus 2k
    • Routing – L3 on physical routers; traditional example: Cisco Nexus 5k

Differentiate logical and physical services (i.e. firewall, NAT, etc.)

  • Logical – within the hypervisor kernel VIB modules and ESG
    • Firewall – firewalling happens at the VM and vNIC level, so policy enforcement is line-speed.  Allows for micro-segmentation between services running on the same host hardware by allowing policy-based access between VMs.  Can also occur on the ESG for North-South traffic.  A result of the logical services model is that services can be implemented quickly and at minimal cost.  Scales with the environment; more hosts = more distributed firewall capacity.
    • NAT – on the ESG.  Has two flavours: source NAT (SNAT) and destination NAT (DNAT) – see the sketch after this list.  Useful for multi-tenant environments where customers may want to use the same private IP range, or where customers have multiple clone environments (test, dev, UAT, prod) and keep services on the same ranges. 
    • LB – on the ESG.  Runs in software and can distribute load amongst (for example) web servers. Can scale up to 8 ESGs per client/solution, so has comparable scale to physical counterparts.
  • Physical running on hardware devices
    • Firewall – physical devices sitting between separate protection zones (normally dictating separate vSphere clusters for DMZ, trusted, etc.).  Subject to hardware outages, and requires expensive, inflexible designs for high availability.  Examples – Juniper, CheckPoint, Cisco, etc.
    • NAT – Depending on requirement, on the firewall.  Again, used for hiding a range of IPs behind a single IP or making a service publicly available.
    • LB – Use of a device such as F5. 
    • All physical options come with additional associated costs (hardware purchase, rack space, power, cooling, support, maintenance, training, licensing, etc.).
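
To illustrate the two NAT flavours on an edge gateway, here is a tiny hypothetical translation sketch; the addresses and rule structure are invented for the example and are not NSX configuration syntax:

  # Hypothetical illustration of SNAT and DNAT rewriting on an edge gateway.
  # Addresses and rule structure are invented for the example.
  import ipaddress

  DNAT_RULES = {"203.0.113.10": "10.0.0.10"}    # public address -> internal node
  SNAT_RULES = {"10.0.0.0/24": "203.0.113.1"}   # internal range -> edge uplink address

  def dnat(dst_ip: str) -> str:
      """Destination NAT: rewrite the destination of inbound traffic."""
      return DNAT_RULES.get(dst_ip, dst_ip)

  def snat(src_ip: str) -> str:
      """Source NAT: hide internal sources behind a single outside address."""
      for prefix, translated in SNAT_RULES.items():
          if ipaddress.ip_address(src_ip) in ipaddress.ip_network(prefix):
              return translated
      return src_ip

  print(dnat("203.0.113.10"))   # inbound to the public address -> 10.0.0.10
  print(snat("10.0.0.10"))      # outbound from the internal node -> 203.0.113.1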

Differentiate between physical and logical security constructs

  • Service Composer
    • Service Composer is used to allow 3rd-party products to interact with VMs.  A typical example is antivirus.  Using Service Composer allows NSX customers to automatically run actions based on a VM’s status (i.e., infected VMs can be quarantined, cleaned and, once virus-free, returned to their original network) – see the sketch after this list.
  • Endpoint Security
    • Through VMware Tools 8.6 or later, users are able to protect VMs without traditional agent-based AV scanning.  Offloading this to the hypervisor has the benefit that scanning occurs in real time while putting less load on guest OSes.
  • Data Security
    • Identifies sensitive data in the environment and reports on regulation violations.  Regulations include PCI-DSS and HIPAA, plus localised (country and US state) rules for driving licence numbers, social security numbers, healthcare numbers and credit card numbers.  Inspects documents for strings of numbers/letters that match these rules and logs the event.
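
The quarantine workflow described under Service Composer is driven by security tags and dynamic group membership. The sketch below is a purely conceptual model of that loop; the tag and group names are invented and this is not NSX API code:

  # Conceptual model of tag-driven quarantine via dynamic group membership.
  # Tag and group names are invented; this is not NSX API code.
  vm_tags = {"web-01": set(), "web-02": set()}

  def security_group(vm: str) -> str:
      """Dynamic membership: a VM carrying the 'infected' tag falls into quarantine."""
      return "quarantine" if "AV.VirusFound" in vm_tags[vm] else "production-web"

  # The partner AV solution finds malware and applies a tag:
  vm_tags["web-02"].add("AV.VirusFound")
  print(security_group("web-02"))     # quarantine -> restrictive policy applies

  # After remediation the tag is removed and the VM returns to its original group:
  vm_tags["web-02"].discard("AV.VirusFound")
  print(security_group("web-02"))     # production-web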

Objective 1.5 –Explain VMware NSX Integration with Third-Party Products and Services

Describe integration with third-party hypervisors

Entirely possible through Open vSwitch.  NSX is not limited to VMware ESXi hosts; it can also run with Xen and KVM.  These hypervisors don’t have a kernel module as VMware ESXi hosts would, so they require Open vSwitch.  NSX on a purely VMware deployment is often referred to as NSX-v to differentiate it.  Refer to Objective 1.3 for a feature comparison.

Describe integration with third-party cloud automation

Integration with any CMP (Cloud Management Platform) through a REST API client for provisioning.
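
As an example of this kind of consumption, the sketch below calls the NSX Manager REST API with basic authentication. The hostname and credentials are placeholders, and the endpoint path shown (/api/2.0/vdn/scopes, which lists transport zones in NSX-v) is from memory and should be verified against the API guide for the NSX version in use:

  # Minimal sketch of consuming the NSX Manager REST API from a CMP / automation tool.
  # Hostname, credentials and the endpoint path are assumptions to verify against
  # the NSX API guide for the version in use.
  import requests

  NSX_MANAGER = "https://nsx-manager.example.local"
  AUTH = ("admin", "password")              # placeholder credentials

  resp = requests.get(f"{NSX_MANAGER}/api/2.0/vdn/scopes",   # list transport zones
                      auth=AUTH,
                      verify=False,          # lab only: skips certificate validation
                      headers={"Accept": "application/xml"})
  resp.raise_for_status()
  print(resp.text)                           # XML list of transport zones ("scopes")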

Describe integration with third-party services

  • Network services

Some vendors (such as F5 load balancers running code 11.4.x) support a VTEP on their device to allow communication between NSX-v and their technology.

  • Security services

Through Service Composer, 3rd-party products can integrate with NSX to offer additional features (AV scanning, IDS, etc.).

Describe integration with third-party hardware

  • Network Interface Cards (NICs)

  • Terminating overlay networks

Manually register a third-party service with NSX

Through Service Composer, 3rd-party services can be manually added; follow the steps documented by the 3rd party.

Install a third-party service with NSX

Some 3rd parties automatically integrate with Service Composer, so the install of the 3rd-party product will automate the configuration of the Service Composer components.

 

Objective 1.6 –Explain VMware NSX Integration with vCloud Automation Center (vCAC)

Knowledge

Describe integration with vCAC

Out-of-the-box integration with vCAC, as opposed to other CMPs, which require REST API configuration work.  Allows vCAC application templates to consume compute, storage, networking and security in a service blueprint.  Can use the NSX L2 gateway for migration between vCAC DCs, for DC consolidation, bursting to the cloud, etc.

Explain NSX deployment capabilities built into vCAC

Manage network and security virtualisation

  • Fabric admin can create network templates to define IP, DNS, WINS, DHCP / IP Pool
  • From which tenant admins and business group managers can define multi-machine templates with network adapters & load balancers

List NSX components that can be pre-created using vCAC

Taken from vcloud-automation-center-60-iaas-configuration-for-multi-machine-services.pdf:

“Tenant administrators and business group managers create NAT, routed, and private network profiles, virtual network adapters, and virtual load balancers, specify security groups, and apply the templates and specifications to the components in a multi-machine blueprint”

Describe Network Profiles available in vCAC

  • External Networks – existing physical networks on vSphere – Prereq for NAT & routed
  • NAT – Created at provisioning time – translate an external address to internal address or range of addresses.
  • Routed – Created at provisioning time – routable space between a range of IP subnets
  • Private – Created at provisioning time – internal only – no access to external or public networks.

Explain NSX preparation tasks that must be completed prior to attaching a network profile to a blueprint

Create a multi-machine blueprint that contains at least one virtual component blueprint

Explain vCAC preparation tasks that must be completed prior to deploying a machine with on-demand network services

I couldn’t see anything alluding to this in either of the documents referenced by the blueprint; if anyone can shed any light on this, I will share.
