Cisco Nexus 1000V Series Switch Installation and Deployment

This article is based on the information found here: http://cvddocs.com/fw/Aug13-555
 
The Cisco Nexus 1000V Series Switch is a virtual distributed switch that runs in software on the virtualized host.
The value of using Cisco Nexus 1000V is that it extends to the hypervisor the same Cisco Nexus Operating System (NX-OS)
that you are familiar with from the Cisco Nexus 5500 Series switches that make up the CVD data center core.
 
Using Cisco Nexus 1000V in your VMware ESXi environment provides ease of operation with a consistent CLI and feature set for the VMware distributed switch environment.
 
The Cisco Nexus 1000V Virtual Supervisor Module (VSM) is the central control point for the distributed Virtual Ethernet
Modules (VEMs). In a typical modular Ethernet switch, the supervisor module controls all of the control plane
protocols, enables central configuration of line cards and ports, and provides statistics on packet counts, among
other supervisory tasks.
 
In the Nexus 1000V distributed switch, the VSM controls the distributed virtual switches,
or VEMs, on the VMware servers.
 
You can install the Cisco Nexus 1000V VSM on a VMware ESXi host as a virtual machine, and you can install a
secondary VSM on a second ESXi host for resiliency. For the ultimate in resiliency and scalability in controlling a
Nexus 1000V environment, you can deploy the Cisco Nexus 1100 Virtual Services Appliance.
 
This design guide is based on the best practices for Cisco Nexus 1000V Series Switches: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
 


The following process installs the Cisco Nexus 1000V Series Switch on virtual machines.
 
Deploying Cisco Nexus 1000V VSM as a VM on an ESXi Host
  1. Install the first VSM
  2. Configure the primary VSM
  3. Install and setup the secondary VSM
 
This process walks you through deploying a primary and a secondary Cisco Nexus 1000V VSM on VMware virtual machines
for resiliency. You will install the VSMs by using an Open Virtualization Format (OVF) template provided in the
Cisco Nexus 1000V software download. This process uses a manual installation method to accommodate an existing VMware environment where multiple network and management interfaces may exist.

Each Cisco Nexus 1000V VSM in an active-standby pair is required to run on a separate VMware ESX or ESXi host. This requirement helps ensure high availability even if one of the VMware ESX or ESXi servers fails.

It is recommended that you disable VMware Distributed Resource Scheduler (DRS) for both the active and standby VSMs,
which prevents the VSMs from ending up on the same server. If you do not disable DRS, you must use VMware
anti-affinity rules to ensure that the two virtual machines are never on the same host. If the VSMs end up on the
same host due to VMware High Availability, VMware DRS posts a five-star recommendation to move one of the VSMs.

The Virtual Ethernet Module (VEM) provides the Cisco Nexus 1000V Series with network connectivity and forwarding
capabilities.
 
Each instance of Cisco Nexus 1000V Series is composed of two VSMs and one or more VEMs.
The VSM and VEM can communicate over a Layer 2 network or a Layer 3 network.
 
Layer 3 mode is the recommended option because it is easier to troubleshoot Layer 3 communication problems between the VSM and the VEM. This design guide uses the Layer 3 mode of operation for VSM-to-VEM communication.
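 
For illustration, a minimal sketch of placing VSM-to-VEM communication in Layer 3 mode on the VSM might look like the following. The domain ID and the choice of the mgmt0 interface are assumptions for this example, not values taken from this guide.

  ! Sketch only: assumed domain ID, with Layer 3 control carried over mgmt0
  svs-domain
    domain id 100
    no control vlan
    no packet vlan
    svs mode L3 interface mgmt0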
 
Procedure 1: Install the first VSM
 
The Cisco Nexus 1000V VSM is a virtual machine that, during installation, creates three virtual network interface
cards (vNICs):
 
The control interface handles low-level control packets, such as heartbeats, as well as any configuration
data that needs to be exchanged between the VSM and VEM. VSM active/standby synchronization is
done via this interface.
 
The management interface is used to maintain the connection between the VSM and the VMware
vCenter Server.
 
This interface allows access to the VSM via HTTPS and Secure Shell (SSH).

The packet interface is used to carry packets that need to be processed by the VSM. This interface is mainly used for two types of traffic: Cisco Discovery Protocol and Internet Group Management Protocol (IGMP) control packets.
 
The VSM presents a unified Cisco Discovery Protocol view to the network administrator through the Cisco NX-OS CLI. When a VEM receives a Cisco Discovery Protocol packet, the VEM retransmits that packet to the VSM so that the VSM can parse the packet and populate the Cisco Discovery Protocol entries in the CLI.
 
The packet interface is also used to coordinate IGMP across multiple servers. For example, when a server receives an
IGMP join request, that request is sent to the VSM, which coordinates the request across all the modules in the switch.
The packet interface is always the third interface on the VSM and is usually labeled “Network Adapter 3” in the virtual machine network properties.
 
The packet interface is used in Layer 2 mode to carry network packets that need to be coordinated across the entire Cisco Nexus 1000V Series switch. In Layer 3 mode, this vNIC is not used; instead, the control and
packet frames between the VSM and the VEM are encapsulated in User Datagram Protocol (UDP) packets.
 
This process requires configuration of the VMware VMkernel interface on each VMware ESXi host.
Ideally, this is the management interface that the ESXi host uses to communicate with the vCenter Server.
 
This alleviates the need to consume another VMkernel interface and another IP address for Layer 3 communication between VSM and VEM. Cisco Nexus 1000V running in Layer 3 mode also eliminates the need for a separate Packet VLAN.
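 
As a hedged illustration, a vEthernet port profile that carries the ESXi management VMkernel traffic for Layer 3 control could be sketched as follows. The profile name and VLAN ID are placeholders for this example and are not taken from this guide.

  ! Sketch only: assumed profile name and management VLAN for Layer 3 control
  port-profile type vethernet ESXi-MGMT-L3
    capability l3control
    vmware port-group
    switchport mode access
    switchport access vlan 163
    no shutdown
    system vlan 163
    state enabled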
 
The control interface on the VSMs is used for high availability communication
between the VSMs over IP; however, it does not require a switched virtual interface on the data center core for
routing beyond the data center. You must ensure that the control and management VLANs are configured on the upstream data center core switches. You can use the same VLAN for control and packet traffic; however, separate VLANs
provide more flexibility.
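 
For example, defining the control and management VLANs on the upstream Cisco Nexus 5500 data center core switches could be sketched as follows. The VLAN numbers and names are placeholders, not values from this guide.

  ! Sketch only: assumed VLAN IDs and names on the data center core
  vlan 160
    name 1kv-Control
  vlan 163
    name DC-Management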