iSCSI Storage + Cisco UCS + vSphere 6 = FTW!

Today I’ll be going over a bit of a mishmash of tech being put together to form a disaster recovery/business continuity site. The newest pieces are a Cisco UCS blade chassis with three B200 M4 blades and a pair of 6200-series Fabric Interconnects. The rest is standard older equipment: 1Gb switches, a Cisco 6509 router, and an EqualLogic storage array.

The goals are to enable iSCSI storage traffic over 1Gb links and have the storage available to VMware hosts installed on the M4 blades. I wanted redundant connections, one per Fabric. The hardware infrastructure diagram:

iSCSI-UCS Infrastructure – Color codes: Green = 1Gb Ethernet to SAN, Red = 10Gb VIC connections, Black = Fiber connections to LAN

1. Check your SFPs. The SFP determines what an FI port can do from a cabling standpoint: what cable type it accepts and what speed it can run. For instance, the SFP used for the 1Gb Ethernet connection from the FIs to the Cisco switch is different from the SFP connecting the FIs to the Cisco router over fiber. Some SFPs only run at certain speeds, so if you have a 10Gb SFP and need it to run at 1Gb (like I did), make sure it can actually do that (mine couldn’t and I had to use others). Cisco support and their interoperability matrix should help.
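
If you want to see what SFP a port actually has without pulling it, the FI’s NX-OS shell will report the transceiver type. A quick sketch from the UCS Manager CLI; port 1/2 is just an example, so substitute your own slot/port:

UCS-A# connect nxos a
UCS-A(nxos)# show interface ethernet 1/2 transceiver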

2. Configure the upstream and downstream ports as Uplink. The SAN connections are considered upstream (green lines in the diagram), while the fiber connections to the Cisco 6509 router are downstream (black lines). The FIs cannot pass any traffic out of the chassis without Uplink ports.
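
If you prefer the UCS Manager CLI over the GUI, the uplink configuration looks roughly like this. A sketch, assuming the SAN-facing uplink is port 2 on the fixed module (slot 1) of Fabric A; repeat for Fabric B and for the LAN-facing ports:

UCS-A# scope eth-uplink
UCS-A /eth-uplink # scope fabric a
UCS-A /eth-uplink/fabric # create interface 1 2
UCS-A /eth-uplink/fabric/interface* # commit-buffer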

3. Throttle down the Ethernet link to the switch(es). The UCS defaults the SFP connection to 10Gbps, but the switch I’m connecting to is 1Gb only, forcing a manual speed change. In the UCS Equipment section, expand the FI’s Ethernet port section, click the port that needs to change speeds, and choose Show Interface in the General tab. In this case I had to do the same thing for the fiber connections from the FIs to the Cisco 6509, since that router only has 1Gb GBICs. Once the port speeds on each side of the cable match, the port will light up in the UCS.
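
For reference, the speed change can also be made from the UCS Manager CLI. I did mine in the GUI, so treat this as a sketch of what the CLI path should look like for the same port 2 on Fabric A rather than a verified transcript:

UCS-A# scope eth-uplink
UCS-A /eth-uplink # scope fabric a
UCS-A /eth-uplink/fabric # scope interface 1 2
UCS-A /eth-uplink/fabric/interface # set speed 1gbps
UCS-A /eth-uplink/fabric/interface* # commit-buffer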

4. Verify the Fabric Interconnects are in End Host mode for both Ethernet and FC. This is the default. The setting is in the General tab under Equipment–>Fabric Interconnects–>Fabric Interconnect A (and B).

5. Build an L2 Disjoint Network in the UCS. This type of network is needed when you have multiple networks with no adjacency to each other. The L2 designates the Layer 2 (Data Link) networking level, which moves data across the physical uplinks. In my configuration, the SAN is only reachable over specific hardware uplinks (L2) and has no Layer 3 routed path to it, so it has no adjacency to any other network in the UCS. We need the UCS to force iSCSI traffic over the two uplinks that are connected to the SAN, which is exactly what the L2 Disjoint Network was designed to do.

5a. Build a VLAN in the UCS designated for iSCSI traffic. The UCS uses VLANs to label networks, so there has to be one to use in the L2 Disjoint Network. In my config the storage array doesn’t require VLAN tagging, so it isn’t expecting traffic tagged with VLAN 999. On the SAN Cisco switch there are two VLANs: the default (1) and one labeled for the iSCSI traffic (999). The switch ports in VLAN 999 are connected to the storage array and the UCS uplink ports. I labeled my VLAN on the UCS the same as on the switch (999) just so I knew what it was for. The point is, you don’t have to match the VLAN numbers, because there isn’t any 802.1Q VLAN tagging happening on the FIs (and I’m not tagging it at the VMware network level either). The FIs are in End Host mode, which restricts their functionality to Layer 2 only. The UCS VLAN naming is cosmetic in this config.
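
For context, the switch side of this is plain access-port config. A sketch of what my SAN switch ports look like, assuming IOS and made-up interface numbers; the ports facing the storage array and the ports facing the FI uplinks get the same treatment:

vlan 999
 name iSCSI
!
interface GigabitEthernet0/2
 description To FI-A port 2
 switchport mode access
 switchport access vlan 999
 spanning-tree portfast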

In the LAN tab, expand LAN–>LAN Cloud and right-click VLANs. Choose Create VLANs. In the Create VLANs window, enter the name and the VLAN ID. Keep the default values: Common/Global (so both FIs have the same configuration) and leave the Sharing Type at None.

VLAN Creation Config
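
If you’d rather create the VLAN from the UCS Manager CLI, the equivalent is short. A sketch, assuming a VLAN named iSCSI with ID 999 (use whatever name you picked):

UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan iSCSI 999
UCS-A /eth-uplink/vlan* # set sharing none
UCS-A /eth-uplink/vlan* # commit-buffer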

5b. Build a VLAN Group in the UCS. Here we group the VLAN with the actual FI uplinks that connect to the SAN switch. I have port 2 on each FI hardwired to the SAN switch, so in the VLAN Group I’ll add the VLAN I created in step 5a and port 2 of each FI. In the LAN tab, expand LAN–>LAN Cloud, right-click VLAN Groups, and choose Create VLAN Group.

Add the created VLAN to the group.

Add the Uplink ports on the UCS that connect directly to the SAN switch.

If you have more than one connection on either fabric and want to add them to a Port Channel, make that configuration change prior to adding the ports to this group. What I have done here is create a VLAN and assign it two uplink ports, guaranteeing that traffic assigned to that VLAN will go over the correct uplinks. Now I have to make sure the vNICs on the VMware host designated for iSCSI traffic use those two FI uplinks attached to the SAN.
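
A quick way to sanity-check that the VLAN really is allowed on the uplinks you expect is from the FI’s NX-OS shell. A sketch, assuming VLAN 999; run it on both fabrics and confirm only the SAN-facing uplink ports are listed:

UCS-A# connect nxos a
UCS-A(nxos)# show vlan id 999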

6. Use UCS Templates to assign vNICs to the VLAN. I’ll use the vNIC Templates section in the UCS to designate my iSCSI VLAN on two vNICs that will be created and attached to the VMware host installed on each blade. This is the logical next step because it puts the freshly created VLAN Group to work: by designating the VLAN on the UCS-created vNIC, it ensures traffic from that vNIC flows through the uplinks assigned to the VLAN Group.

vNIC Template

Above pic: I’ve highlighted the fifth vNIC that will be assigned to the VMware host. I will use vNIC 4 (Fabric A) and vNIC 5 (Fabric B) as independent, redundant connections to the iSCSI storage at the VMware host level. At the UCS level I assign the iSCSI VLAN created earlier. I’m also assigning it as the native VLAN, since this is the only path it will ever use. By doing this, any traffic from the VMware host on vNIC 4 or 5 will automatically go through the uplink ports in the UCS VLAN Group.
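
Once the service profile built from these templates is associated and ESXi is installed, the UCS vNICs show up on the host as vmnics. A quick check from the ESXi shell (the numbering you see depends on your vNIC placement, so don’t assume it matches mine):

~ # esxcli network nic list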

So far, the way UCS uses VLANs can be very confusing (for me, anyway), because you have to suspend your networking Layer 2 and 3 knowledge. Well, maybe “change the way you think about it” is a better way to put it. Now that I’ve set up the iSCSI traffic path in the UCS, the VMware setup is pretty simple.

7. Create VMware VMkernel vNICs and assign them to the correct UCS-created vNICs. Lots of fricking vNICs going on right now, huh? You bet. This is the home stretch, though, and if everything is set up correctly the VMware host will be able to scan and retrieve all the LUNs assigned to it on the storage array. The example below uses a VMware vSS (standard vSwitch) since the site is in its infancy.

VMkernel Config 0

VMkernel Config 1

The VMkernel vNIC has been attached to vNIC 4 on the VMware host using the UCS vNIC Template named vnic4-iSCSI-A (Fabric A). I created another vSS and another VMkernel vNIC to attach to the UCS-created vNIC 5, since that vNIC is on Fabric B. I wanted redundant connections, one on each UCS Fabric (FI).

By using the templates, these vNICs are already assigned to VLAN 999 (the iSCSI VLAN), so all of their traffic will automatically be diverted to the two FI uplink ports that connect to the SAN.

The VMkernel vNIC has been assigned an IP on the same subnet as the storage array. We’re going over the new UCS L2 Disjoint Network, so I have to be on the same subnet.
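
The same vSS/VMkernel setup can be scripted from the ESXi shell instead of the client. A sketch for the Fabric A path, assuming vmnic4 is the UCS vNIC on Fabric A and the array lives on 10.10.10.0/24 (both are examples, not my real values); repeat with vmnic5 and a second vmk for the Fabric B side:

# Create the vSS, add the Fabric A vmnic as its uplink, and add a port group
esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI-A
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-iSCSI-A --uplink-name=vmnic4
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-iSCSI-A --portgroup-name=iSCSI-A

# Create the VMkernel vNIC on that port group and give it an IP on the storage subnet
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.21 --netmask=255.255.255.0 --type=static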

8. Create an iSCSI adapter on the VMware host and bind the appropriate vNICs to it. On the VMware host, go to the Configuration section, choose Storage Adapters, and click Add. The path to that section will differ depending on whether you’re using the fat or web client. Add the iSCSI adapter, then open its properties. In the Network Configuration section, add the two VMkernel vNICs.

iscsi-ad-0

In the Dynamic Discovery section, add the IP address, port number, and authentication protocol for your iSCSI array. After a rescan of the iSCSI adapter, the assigned LUNs should appear on the host.
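
The CLI equivalent, assuming the software iSCSI adapter (which is what the Add button creates here) and made-up values for the vmhba name and array IP; check the adapter list for the real vmhba number first:

# Enable the software iSCSI adapter and find its vmhba name
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list

# Bind the two VMkernel vNICs to the adapter (vmhba33 is an example)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Add the array's discovery address and rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.50:3260
esxcli storage core adapter rescan --adapter=vmhba33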

9. Troubleshoot with vmkping. If the connection to the storage fails, test the IP connection with the vmkping tool. Enable SSH on the host, then use PuTTY to access it. After logging in as root, send ICMP packets to the storage array from a VMkernel vNIC on the same subnet. For instance, my VMkernel vNICs assigned to the iSCSI adapter are vmk1 and vmk2. An example vmkping would look like this:

vmkping

The syntax: vmkping -I <interface name> <target IP address>
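
A couple of concrete examples with a made-up array IP; the second is useful if you’re running jumbo frames, since -d sets the don’t-fragment bit and -s sets the payload size:

vmkping -I vmk1 10.10.10.50
vmkping -d -s 8972 -I vmk1 10.10.10.50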

If the pings fail…well, you know how many things it could be.
