- Build a working lab for DR and testing environments using hardware decommissioned from the datacenter.
- Build working VMware and Hyper-V clusters with iSCSI storage connections to an EqualLogic storage array.
- Add capability for VLAN-separated traffic in case we actually have to fire up production virtual machines that would normally run in a separate datacenter.
The HP Flex-10 Ethernet modules are built for a 10Gb backbone, and we have a 1Gb physical core in the lab, so we're overdoing it a bit here. Still, with the throttling capability and the power of the enclosure backbone network, we should be able to get the most out of the module.
With only two SFP connectors on the Flex-10, we have just two physical wires connecting the enclosure to the physical core. We'll use a combination of VLANs, VCM Uplink Sets, and Server Profiles with the Multiple Networks option to create what we need.
- HP c7000 Enclosure
- 6 Proliant BL490 G6 Blades
- 6 HP Blade Mezzanine Cards
- 1 Cisco 3560 GB Switch
- 1 Cisco 2950 10/100 Switch
- 1 Cisco 2960-S GB Switch
- 1 EqualLogic PS Series Storage Array
- 1 Flex-10 10Gb Ethernet module
- 2 SFPs for the Flex-10 Ethernet Module
- 1 HP On-Board Administrator card
- 2 Cisco 3020 Blade Switches for the HP Enclosure
| Term | Definition |
|---|---|
| HP-OA or OA | Hewlett Packard Onboard Administrator. This web GUI is used to administer the enclosure and all the modules in the interconnect bays. |
| HP-VCM or VCM | Hewlett Packard Virtual Connect Manager. This web GUI is used to administer the virtual networking inside the enclosure. |
| LOM | LAN on Motherboard. These are the virtual network cards programmed into the enclosure backbone. |
| Flex-10 Module | Presents flexible virtual networking to the hypervisor software installed on each blade. |
| SFP | Small Form-factor Pluggable transceiver. These connect to the Flex-10 module, providing physical connections from the Flex-10 to the core switches. |
| Uplink Set | Used in HP-VCM to carry traffic from multiple VLANs through one or more physical wires. |
- A single Flex-10 module environment will work, no matter what others tell you. If redundancy is a requirement, you'll have to add a second module.
- Be aware that with only one Flex-10 module, only 4 of the possible 8 LOMs will be online. All eight will be presented to the hypervisor on the blade, with only four online.
- Plan your VLAN. It rhymes, and it’s true. Map out your VLANs for the SAN and LANs. An example:
- Management VLAN – For management of the VMware and Hyper-V hosts
- Traffic VLAN – For access to the HP infrastructure (OA, VCM) and all virtual machines. This VLAN is typically the same one as the rest of your infrastructure if you want the lab to be able to communicate with the rest of the production network. Since you can control VM access in your vSphere or Hyper-V virtual networks, it seems reasonable to enable that option and put them on the same VLAN.
- A separate logical/physical SAN – In this case the SAN is physically separated, with the Cisco switches inside the HP enclosure connecting to another switch attached to the storage array. The SAN typically sits on an alternate logical network too, and such is the case here.
- Map the VLANs on the physical core switches and determine which ports you will use as the uplinks from the enclosure's Flex-10 module.
- The Interconnect Bays Matter, a Lot – The VCM will not work properly unless the interconnect devices are in the correct slots. The slots are numbered from the top left:
1-2 (Flex-10 Ethernet modules)
3-4 (Flex-10 Fiber Channel modules)
5-6 (Cisco Enclosure Switches)
9-10 (Onboard Administrator Modules)
The Ethernet Flex-10 modules must go in slots 1 and 2. In my case, I just used slot 1.
- Blade Mezzanine Cards – The Mezzanine cards allow the blades to establish network connections through the enclosure's two Cisco switches. Each card also presents two "physical" NICs to the blade, which in turn are presented to the hypervisor. In this lab, the Cisco switches connect to the SAN switch, allowing connectivity to the EqualLogic storage. In the hypervisor we will use the Mezzanine NICs to create an iSCSI connection for storage.
If one of the Flex-10 modules is moved from one interconnect bay to another, it's likely that you won't be able to log in to the VCM. In this case, the password has been reset to the default, which isn't a standard password. The password can be found on the sticker with the barcode that shipped in the box with the module. The username will be Administrator and the password an eight-character alphanumeric code.
Setup and Update the Firmware for the Onboard Administrator
Set up the HP-OA by first connecting the module to slot 9 in the enclosure, which is the lowest left slot. Connect the HP-OA to the network via an Ethernet cable. The HP-OA will use DHCP by default to obtain an IP address. If DHCP isn't available, you can configure the IP address manually using the Insight Display on the front of the enclosure.
The setup wizard is fairly straightforward. You'll need to assign a static IP, an NTP server for time, and login accounts. The accounts can be local or AD-based. I configured both.
Once the HP-OA is configured, you should update the firmware. These updates will improve the iLO connectivity, the VCM, and the backplane that connects the blades to the networking infrastructure inside the enclosure. Navigate here to begin updating:
Setup the Physical Switches
We had two SFPs for the Flex-10 module, which means we have only two physical connections from the enclosure to the core. Although we could use one of those to connect to the SAN via iSCSI, we plan on using the enclosure's Cisco switches to connect to the SAN. In this example, the Ethernet cables connect the SFPs to one access port and one trunk port on the same core switch. The trunk port will be used for multiple-VLAN access from the enclosure; the access port will carry one VLAN for management. The Cisco switches are connected to the SAN switch on a separate network and will be accessed by the blades via the Mezzanine cards.
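On the Cisco side, the two core-switch ports might look something like the following sketch. The interface numbers and the management VLAN ID (10 here) are placeholders; VLAN 19 is the multi-VLAN trunk used later in this lab.

```
! Hypothetical interface numbers and management VLAN ID (10).
! X1 -> access port (management), X2 -> trunk port (Shared Uplink Set).
interface GigabitEthernet0/1
 description Flex-10 X1 - management access
 switchport mode access
 switchport access vlan 10
!
interface GigabitEthernet0/2
 description Flex-10 X2 - Shared Uplink Set trunk
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 19
```

Adjust the allowed VLAN list as you add networks to the Shared Uplink Set later.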
Starting the Build
All of the following will be done through the Virtual Connect Manager (VCM) web interface.
Working with the Virtual Connect Manager (HP-VCM)
Connect the SFPs into the X ports on the Flex-10 module. Connect the Flex-10 module into slot 1. If you have two modules, connect the other to slot 2.
Here’s a very good detailed walk-through on setting up the HP-VCM using the wizards:
Understanding how the blades, the enclosure backbone, and VMware interconnect is important. The following blog entry by Kenneth Coleman will help:
Setup Ethernet Networks in the HP-VCM
Here we set up networks that will communicate with each other inside the enclosure or with the outside world via the Flex-10. In the VCM, navigate to Connections –> Ethernet Networks.
- Choose the network name
- Choose the network options
a. A Private Network is not allowed to communicate with any other network. It is completely fenced off.
b. The Smart Link option drops the server-side links when all of the network's uplinks fail, letting NIC teaming on the blade fail over.
c. The Advanced Setting opens bandwidth options for the network
- Choose the External Uplink Port. In this case, it’s either X1 or X2 which are the first two ports in the Flex-10 module with SFPs in them.
- Choose the Network Access Groups. You can put this new network in a group for organization and access. For the lab, I put all the networks in the “default” group.
- Click Apply and the network will be created.
- Repeat the process for each network you want to create.
Notes: You don’t have to connect an Ethernet Network to an External Uplink Port. The network will still function if the machines connected to it are on the same subnet within the enclosure. The enclosure backplane handles the internal networking in this scenario.
VLANs: Create all your VLAN networks in this area. Leave the External Uplink Port section blank and the Enable VLAN tunneling box unchecked.
Setup Shared Uplink Sets in the HP-VCM
A Shared Uplink Set is a specific connection used to carry traffic from multiple VLANs to the physical network outside the enclosure.
To create a Shared Uplink Set: navigate to Connections–>Shared Uplink Set
- Choose the Uplink Set Name.
- Choose the External Uplink Port via the Add Port box. This will be another physical port on the Flex-10 module with an SFP.
- Choose the Connection Mode. Unless you’re choosing more than one port, leave this at Auto.
- Add Associated Networks. Here's where we add the VLANs created in the Ethernet Networks section. When the + sign is clicked, you can add an already-created VLAN Ethernet Network or create one fresh. You can add multiple VLANs here.
** In VMware the VLAN tag will be assigned in the dVS port group.
Server Profiles Overview
The Server Profile adds the next layer of virtualization and presents "physical" NICs to the hypervisors. For review, each Flex-10 module presents four physical NICs to the hypervisor and each mezzanine card presents one. The HP-VCM auto-assigns the NICs, and it assumes there will be two Flex-10 modules in the enclosure. So, as Ethernet connections are added to the server profile, the HP-VCM will assign some of them to the non-existent second Flex-10 module.
This lab has one Flex-10 module and two mezzanine cards, so the hypervisors will be able to use six of the physical NICs presented (four from the Flex-10 and one from each mezzanine card). Keep in mind that the HP-VCM will present the maximum of 10 NICs to the hypervisors. The unusable (not connected) NICs will show as "down" in all hypervisors.
Setup Server Profiles
Navigate to Connections–>Server Profiles
- Right-click anywhere in the empty field and click Add.
- Name the Profile. Profiles are assigned to each bay, so it will be helpful to keep these names unique.
- The VCM gives you two connections to begin with. Click the Unassigned network name to reveal a drop-down menu. From here you may assign Ethernet Networks created earlier. In this case, leave all Unassigned and move on to Step 4.
- For the sake of the hypervisors and taking into account the auto-assignment of these NICs by the VCM, adding 10 connections will give you all the connections needed.
- You may add iSCSI or HBA connections in the next section if your lab has the capability.
- Assign the profile to a Bay.
- Save the profile.
The profile cannot be saved while the blade is turned on. The bay assignment, along with the inventory of the blade, determines how the HP-VCM maps the network connections. The inventory is static, so turn the blade off before assigning networks to the HP-VCM-assigned LOMs and Mezzanine interfaces.
Reviewing the columns left to right:
Port: Auto-assigned port in the VCM.
Network Name: These are the Ethernet Networks created in the VCM.
Status: The status of the network configuration only.
Port Speed: Assign the speed of the network card. “Preferred” lets the enclosure decide via negotiation.
Allocated Bandwidth: The amount of bandwidth assigned to the network; there is 10Gb to split up.
PXE: Defaults to the BIOS for PXE boot options.
MAC: The MAC address of the Flex-10 virtual NIC
Mapping: Specifies how the Flex-10 and the two Mezzanine cards map to the physical NICs presented to the blade. For this lab, I have one Flex-10 card, seated in Bay 1 of the enclosure, so I can only use the NICs in LOM1-a,b,c,d. Only those cards will look "online" in the hypervisors installed on the blades. Two Mezzanine cards are installed in the enclosure, so "Mezz2:1-Bay5" and "Mezz2:2-Bay6" are in play and will show as online in the hypervisors.
The enclosure backplane provides a 10Gb network backbone. You can divvy up the bandwidth per connection to maximize throughput. Unless your environment also has a physical 10Gb backbone, most of this bandwidth will go unused externally, since the physical connections are limited to 1Gb.
To allocate bandwidth, click the Port Speed field of the network you wish to change and choose Custom from the drop-down menu. Once Custom is chosen, a notepad icon will appear. Click the notepad and choose the speed of the connection.
From the graphic above you can tell that the lab has a 1Gb backbone. I have assigned the networks with external connections a maximum of 1Gb while using the rest for the internal backplane networks.
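The partitioning rule behind this is simple arithmetic: the per-connection allocations sharing a Flex-10 LOM can't exceed the 10Gb backbone. A small sketch of that check (the network names and allocation numbers are illustrative, mirroring the lab's 1Gb-external / internal split):

```python
# Illustrative Flex-10 bandwidth plan: connections share a 10Gb backbone.
FLEX10_BACKBONE_GBPS = 10

# Hypothetical allocations: externally wired networks are capped at the
# 1Gb physical core speed; the internal-only network gets the remainder.
allocations_gbps = {
    "Management": 1,   # external, limited by the 1Gb core
    "VLAN19": 1,       # external, limited by the 1Gb core
    "vMotion": 8,      # internal-only, runs at backplane speed
}

def validate(allocs, backbone=FLEX10_BACKBONE_GBPS):
    """Return True if the allocations fit within the backbone."""
    return sum(allocs.values()) <= backbone

print(validate(allocations_gbps))  # True: 1 + 1 + 8 = 10
```

Anything wired to the core beyond 1Gb is wasted allocation, so it makes sense to hand the spare backbone capacity to internal-only networks like vMotion.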
Internal Backplane Connections
The Enclosure backplane will connect the blades without external wires if they are on the same network. It’s the same concept as a virtual switch in VMware or Hyper-V. Since we’re using hypervisors on the blades, we’ll see these connections in the management interface of the hypervisor. For instance, we create a vSwitch in VMware and attach a network card that we know is connected to a LOM in the HP-VCM. The LOM has no external wired connections, but it is assigned a HP-VCM created Ethernet network. Now, if we put VMs on the hypervisor’s vSwitch they will be able to communicate via the backplane. This is important as we build a cluster inside one enclosure, since all vMotion or Live Migration traffic will take place via the backplane and never leave the enclosure.
The above is from the VMware cluster. vmnic4 is attached to what looks like an online network. I am using this network for vMotion only. Below is the same vMotion network in the HP-VCM server profile for the blade that has the ESXi host installed:
I’m using this connection for vMotion, so as long as all the vmkernel NICs are on the same subnet, we can use the enclosure backplane for connectivity.
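The vMotion vmkernel adapter can be set up in the vSphere client, or from the ESXi shell with standard esxcli commands. A sketch, where vmk1, the "vMotion" port group name, and the addresses are placeholders for this lab:

```
# Placeholders: vmk1, the "vMotion" portgroup, and the subnet are illustrative.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static
# Tag the interface for vMotion (older hosts; newer builds can use
# "esxcli network ip interface tag add -i vmk1 -t VMotion").
vim-cmd hostsvc/vmotion/vnic_set vmk1
```

Repeat on each host, keeping all the vMotion vmkernel interfaces on the same subnet so the backplane carries the traffic.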
Assigning Networks in the Server Profile
The Shared Uplink Set created earlier is designed to carry traffic from multiple VLANs, and this is where we set that up. When assigning a port in the Server Profile to "Multiple Networks", specific networks need to be chosen. Click the notepad icon to reveal the available networks. All the networks presented in the menu were created earlier in the Ethernet Networks section of the VCM.
Drag and drop the networks you want to flow through the Uplink Set. You can have as many as 162 VLANs share the Uplink Set. Click the “Force the same VLAN mapping…” checkbox to enforce VLAN tagging from all the VLANs added. In this case, I’m using VLAN tagging inside the virtual distributed switch port groups in VMware.
In combination, these settings allow the VLAN tagging to pass through the Uplink Set to the physical switch trunk port, which has been configured to accept traffic from VLAN 19.
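On the VMware side, the matching dVS port group tagging can be scripted with PowerCLI. A hedged sketch, where the switch and port group names are made up for this lab:

```powershell
# Names are illustrative; VLAN 19 matches the trunk configured on the core.
$vds = Get-VDSwitch -Name "Lab-dVS"
New-VDPortgroup -VDSwitch $vds -Name "VM-VLAN19" -VlanId 19
```

VMs attached to this port group will have their frames tagged with VLAN 19 before the traffic enters the Uplink Set.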
iSCSI Network Setup for VMware Lab
This is pretty simple. We’re using the Mezzanine NICs presented and connecting to the SAN network. In the VCM, leave these NICs “Unassigned” in the Server Profile (see the server profile example above).
In VMware, the Mezzanine NICs are clearly presented. Attaching each Mezzanine NIC to a separate vSwitch (or dVS) with its own vmkernel adapter allows for iSCSI multipathing. Since the Mezzanine cards are mapped to the HP enclosure's embedded Cisco switches, which are connected to the SAN network, the EqualLogic becomes accessible.
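Binding the vmkernel adapters to the software iSCSI initiator can likewise be done from the ESXi shell. A sketch using standard esxcli commands; the adapter name (vmhba33), interface names (vmk2/vmk3), and the group IP are placeholders:

```
# vmhba33, vmk2, vmk3, and the group IP are placeholders for this lab.
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list                    # find the software hba name
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
# Point the initiator at the EqualLogic group IP (placeholder address).
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 \
    --address=192.168.60.10:3260
```

With both vmkernel ports bound, the host gets two paths to the array through the enclosure's embedded switches.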
Lab Network Map
For this lab, I have three defined networks. The VCM specifications are below:
1. Management – Assigned to the X1 port on the Flex-10 and connected to the access port on the core switch for the management VLAN only.
2. VLAN 19 – Assigned to an Uplink Set for multiple VLAN traffic. The Uplink Set Ethernet cable is connected to a trunk port on the physical core switch.
3. vMotion – This is an internal network only. No external wires are needed since it's on a separate subnet carried by the backplane.
From here the lab should be up and running.