Part 1: Installing the SCVMM provider and creating the VSM VM template (this post)
Part 2: Installing the VSM VMs
Part 3: Create a configuration for the switch
Part 4: Add the VSM to SCVMM
Part 5: Create a VM to use the Switch
The Cisco Nexus 1000V is made up of two components:
- The VSM
The VSM provides the management and control plane functions for the Cisco Nexus 1000V Switches. Much like a supervisor module in a Cisco Nexus 7000 Series Switch, the VSM provides the switch control and management plane to the network administrator, coordinating configuration and functions across VEMs. Unlike a traditional Cisco switch, in which the management plane is integrated into the hardware, on the Cisco Nexus 1000V the VSM is deployed either as a virtual machine on a Microsoft Hyper-V server or as a virtual service blade (VSB) on the Cisco Nexus 1010 or 1110 appliance.
- The VEM
The VEM provides the Cisco Nexus 1000V with network connectivity and forwarding capabilities much like a line card in a modular switching platform. Unlike multiple line cards in a single chassis, each VEM acts as an independent switch from a forwarding perspective. The VEM is tightly integrated with the Microsoft Hyper-V hypervisor. The VEM is installed as a forwarding extension to the Microsoft Hyper-V extensible switch that runs in the Microsoft Windows server kernel.
Unlike the VSM, the VEM's resources are unmanaged and dynamic. Although the storage footprint of the VEM is fixed (approximately 6.4 MB of disk space), RAM use on the Microsoft Hyper-V host is variable, based on the configuration and scale of the Cisco Nexus 1000V deployment. In a typical configuration, each VEM can be expected to require 10 to 50 MB of RAM, with an upper limit of 150 MB for a fully scaled solution with all features turned on and used to their design limits.
Each instance of the Cisco Nexus 1000V is typically composed of two VSMs (in a high-availability pair) and one or more VEMs. The maximum number of VEMs supported by a VSM is 64.
First we are going to create the backbone of the switch by installing two VMs to make the switch highly available.
Download the Nexus 1000V software from the Cisco website and extract the ZIP file.
From the VEM folder, copy Nexus1000V-VEM-5.2.1.SM1.5.2b.0.msi to the SCVMM server (C:\ProgramData\Switch Extension Drivers). This is the installer SCVMM uses to deploy the VEM extension into the Hyper-V switch on the hosts.
From the VSM subfolder of the extracted folder, copy the ISO to the ISO share of your MSSCVMMLibrary.
Also from the extracted folder, copy Nexus1000V-VSEMProvider-5.2.1.SM1.5.2b.0.msi to the desktop of the VMM server.
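The three copy steps above can be sketched in PowerShell. All paths here are assumptions (the extraction folder, the library-share UNC path, and the exact subfolder of the provider MSI); adjust them to your environment:

```powershell
# Assumed extraction folder -- change to wherever you unpacked the zip.
$src = 'C:\Temp\Nexus1000V'

# 1. VEM installer to the folder SCVMM uses for switch-extension drivers
Copy-Item "$src\VEM\Nexus1000V-VEM-5.2.1.SM1.5.2b.0.msi" `
          'C:\ProgramData\Switch Extension Drivers\'

# 2. VSM ISO to the ISO share of the VMM library (assumed UNC path)
Copy-Item "$src\VSM\*.iso" '\\vmm01\MSSCVMMLibrary\ISO\'

# 3. VSEM provider MSI to the desktop of the VMM server
Copy-Item "$src\Nexus1000V-VSEMProvider-5.2.1.SM1.5.2b.0.msi" `
          "$env:USERPROFILE\Desktop\"
```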
Run the MSI (be aware that it restarts the SCVMM service):
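The installer can also be run unattended from an elevated prompt; a sketch, assuming the MSI sits on the desktop as copied above:

```powershell
# /qn runs the installer silently; the SCVMM service is restarted in the process,
# so plan for a short management outage.
Start-Process msiexec.exe -Wait `
    -ArgumentList '/i', "$env:USERPROFILE\Desktop\Nexus1000V-VSEMProvider-5.2.1.SM1.5.2b.0.msi", '/qn'

# Verify the service came back up afterwards.
Get-Service SCVMMService
```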
Cisco provides a script that creates a VM template for the VSM servers we are going to deploy. On the VMM server, open an administrative PowerShell session and navigate to the folder containing the extracted files.
Run Register-Nexus1000VVSMTemplate.ps1:
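A sketch of the registration plus a quick check that the template landed in the VMM library (the name filter is an assumption; adjust it to the template name the script actually creates):

```powershell
# Run the Cisco-provided script from the folder it was extracted to.
.\Register-Nexus1000VVSMTemplate.ps1

# The new template should now appear in the VMM library.
Get-SCVMTemplate | Where-Object { $_.Name -like '*Nexus*' }
```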
You might notice that the template VM has three network adapters. The manual makes the following statement about their usage:
The VSM is a virtual machine that requires three virtual network interface cards (vNICs). Each vNIC has a specific function, and all are fundamental to the operation of the Cisco Nexus 1000V. To define the VSM virtual machine properties, the vNICs require the synthetic network adapter.
- Control Interface
The control interface is primarily used for VSM high-availability communication between the primary VSM and the secondary VSM when high-availability mode is used. This interface handles low-level control packets such as heartbeats. Because of the nature of the traffic carried over the control interface, it is of the utmost importance to the Cisco Nexus 1000V switch. Some customers like to keep network management traffic in a network separate from the host management network. By default, the Cisco Nexus 1000V uses the management interface on the VSM to communicate with the VEM. However, this communication can be moved to the control interface by configuring server virtualization switch (SVS) mode to use the control interface.
- Management Interface
The management interface appears as the mgmt0 port on a Cisco switch. As with the management interfaces of other Cisco switches, an IP address is assigned to mgmt0. Unlike on VMware ESX, Layer 2 communication between the VSM and the VEM is not supported on the Microsoft Hyper-V host, so the management interface is used for all VSM-to-VEM communication by default.
- Packet Interface
The packet interface is a traditional interface on the Cisco Nexus 1000V VSM for Microsoft Hyper-V.
Now deploy two VMs from the new template, which is available under Templates.
Note: the NIC order is as follows:
Adapter 1 = Control adapter
Adapter 2 = Management adapter
Adapter 3 = Packet adapter
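Deployment is easiest through the console wizard, but the same steps can be sketched with the VMM cmdlets. The template filter, host name, and VM names below are placeholders, the exact parameter sets can vary between VMM versions, and placement details (path, network mappings) are omitted:

```powershell
# Placeholder names -- substitute your own template filter, host and VM names.
$template = Get-SCVMTemplate | Where-Object { $_.Name -like '*Nexus*' }
$vmHost   = Get-SCVMHost -ComputerName 'HV01'

foreach ($name in 'N1KV-VSM-1', 'N1KV-VSM-2') {
    # Build a VM configuration from the template, place it on the host, deploy.
    $config = New-SCVMConfiguration -VMTemplate $template -Name $name
    Set-SCVMConfiguration -VMConfiguration $config -VMHost $vmHost
    New-SCVirtualMachine -Name $name -VMConfiguration $config
}
```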
When the deployment is done, mount the Nexus ISO in both VMs and start them.
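Mounting the ISO and powering on can likewise be sketched with the VMM cmdlets (the VM names and the ISO name filter are placeholders):

```powershell
# Grab the Nexus ISO that was copied to the library share earlier.
$iso = Get-SCISO | Where-Object { $_.Name -like '*Nexus*' }

foreach ($name in 'N1KV-VSM-1', 'N1KV-VSM-2') {
    $vm  = Get-SCVirtualMachine -Name $name
    $dvd = Get-SCVirtualDVDDrive -VM $vm
    # Attach the ISO to the VM's DVD drive, then power on.
    Set-SCVirtualDVDDrive -VirtualDVDDrive $dvd -ISO $iso
    Start-SCVirtualMachine -VM $vm
}
```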
In the next post we are going to install the VSM. Stay tuned!