Tuesday, November 22, 2016

Moving a service profile to a different UCS blade

Basic guide:
https://supportforums.cisco.com/document/29926/what-are-quick-steps-moving-service-profile-different-blade

First off, our scenario was a bit different.

Normally, one would simply disassociate the service profile and associate it with the new blade....but this was a B200-M4 replacing a B200-M3.

You'll get an alert saying there is a BIOS issue if you try to associate the service profile...the issue here has to do with the host firmware package.  The default one being used in this case did not have a software package to support the M4 blade.

Since the service profile was created from the template, we can modify the service profile to point to a newly created host firmware package without impacting the other blades.

Once complete...it goes through the normal process of associating the SP to the blade...with some errors.  The KVM at this point is displaying the pre-POST message "Configuring and verifying memory," and there are a number of faults...mainly that the memory and processors were using unknown or unsupported FRUs.

My first thought was to update the capability catalog..no change.

Then I found these release notes:

http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/release/notes/ucs_2_2_rn.html#pgfId-390503

The column on the left indicates the processor type with a minimum and recommended software version...we were running 2.2(6c).

Whoops!

As seen above, I updated the blade package on the host firmware package associated with the B200-M4 servers.  That way, as long as I associate the same host firmware package, it will use the correct version.  

Now, with the host firmware package "b200m4" updated, I re-associated the service profile; the blade rebooted and came up successfully.

Tuesday, November 15, 2016

Storage notes!

Intro

So far I've come across/implemented FC, FCoE, iSCSI, and NFS.  Here are some basic notes on those technologies!

FCoE

Implementation on a Nexus 5K..

We typically will have two VSANs and two FCoE VLANs, for example:
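Fabric A (N5K-A): FCoE VLAN 304 mapped to VSAN 4
Fabric B (N5K-B): FCoE VLAN 305 mapped to VSAN 5

(These are the values used in the configuration steps below.)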

With regards to UCS..your cabling should look something like this:



Note: These should be separate links from those used to connect the FIs to the Nexus 5Ks for ethernet connectivity (cross-connect to the N5Ks & use vPC).

Now that we're cabled...

1. Enable the features we need:

feature lacp
feature npiv 
feature fcoe


2. Create the FCoE VLANs (304 & 305 on FAB-A & FAB-B, respectively):

Fabric A
vlan 304
name FCoE-VLAN_304 
exit

Fabric B
vlan 305
name FCoE-VLAN_305
exit


3. Create the VSANs (4 & 5 on FAB-A & FAB-B, respectively) and associate the VSAN with FCoE VLANs

Fabric A
vsan database 
vsan 4 
vsan 4 name General-Storage 
exit 

vlan 304
fcoe vsan 4 
exit

Fabric B
vsan database 
vsan 5 
vsan 5 name General-Storage 
exit 

vlan 305
fcoe vsan 5
exit


4. Create a port-channel containing physical member interfaces:

interface ethernet2/1 
description FCoE Link to FI-A eth1/33 
channel-group 33 mode active 
no shutdown 

interface ethernet2/2 
description FCoE Link to FI-A eth1/34 
channel-group 33 mode active 
no shutdown

Note: Perform the same action on FAB-B N5K


5. Configure the port-channel to trunk and allow the FCoE VLAN:

interface port-channel 33 
description FCoE EtherChannel Link to FI-A 
switchport mode trunk 
switchport trunk allowed vlan 304 
spanning-tree port type edge trunk

Note: Perform the same action on FAB-B N5k


6. Create a virtual fibre channel (vfc) interface, bind it to the port-channel we just created, and allow the VSAN associated with the fabric:

interface vfc 33 
bind interface port-channel33
switchport trunk allowed vsan 4 
switchport mode F 
no shutdown

Note: Perform the same action on FAB-B N5K


7. Verify..

Some useful show commands...

show flogi database
Use this command to see fabric logins.  We'll use this information to set up our zoning!  If you aren't seeing logins...check host connectivity.  In the case of UCS & the FIs, you'll see multiple FLOGIs on the same vfc.  
show zoneset active
Once the zones have been created and we see FLOGIs, I'll use this command to verify my zoneset has been committed.  If there is an issue with the FLOGI, you'll likely NOT see an FCID associated with the zone member.  If you set up your zoning and don't configure FCoE...you'll simply see the PWWN/alias you configured but no FCID.
show interface vfc38
This will show you whether your trunk is allowing your VSANs, along with the current interface state.  I had a scenario where the state was "initializing" on my FCoE VLAN.  Further investigation found that the host connected to the vfc and the respective physical interface had NOT performed a FLOGI.  A rescan did not prove to be helpful...but a reboot did :)

As seen here...our vfc is 38, the bound interface is eth1/38, we're allowing the VSAN across the trunk, and it is up:





8. Wait....what does my host using FCoE need to do?  

Well, when you configure your CNA/HBA to use FCoE, you'll need to tell it which FCoE VLAN to use.  The benefit of a CNA is that it can carry both FCoE and ethernet traffic.

9. Lastly....zoning..I'm just not sure how to do it..



The gist: each initiator should ONLY have access to its target(s).  In the above example, we have 3 zones per fabric (3 initiators), and we've allowed the hosts access to both storage processors on our storage array.  Why?  Well, while the LUN on the storage array is "owned" by one storage processor, we want the host to have a path to both storage processors, should one path become unavailable.
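
As a rough example, one of those single-initiator zones on the Fabric A 5K might look something like this (the device-alias names and PWWNs are made up purely for illustration...you'd create one zone per initiator, and repeat the idea on Fabric B using VSAN 5):

device-alias database 
device-alias name ESX01-vHBA-A pwwn 20:00:00:25:b5:0a:00:01 
device-alias name ARRAY-SPA pwwn 50:06:01:60:3e:a0:00:01 
device-alias name ARRAY-SPB pwwn 50:06:01:68:3e:a0:00:01 
exit 
device-alias commit 

zone name ESX01_FAB-A vsan 4 
member device-alias ESX01-vHBA-A 
member device-alias ARRAY-SPA 
member device-alias ARRAY-SPB 
exit 

zoneset name FAB-A_ZONESET vsan 4 
member ESX01_FAB-A 
exit 
zoneset activate name FAB-A_ZONESET vsan 4

Once it's activated, "show zoneset active" should show an FCID next to each member that has performed a FLOGI.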


FC

Wait, why should I use FC?  Well---this one is open to debate.  Some people think that the requirement for FC is dying.  While we typically have 8Gb or 16Gb FC connections...there seems to be a race between FC and ethernet.  Back in the day, FC was the declared winner...but with ethernet capabilities allowing for 10Gb, 25Gb, 40Gb, 100Gb...it's possible that we may no longer see native FC!  But for the time being...it's pretty damn easy and straightforward to set up (why a lot of people care for it!)

That being said...cable that thing in the same way you would FCoE:
Furthermore, our N5Ks will have cross-connected paths to our storage, just like the FCoE diagram.  One thing worth noting....if we have a Unified Ports ("UP") capable 5K, changing a port type from ethernet to FC requires a reboot!  We also work left to right for ethernet and right to left for FC (see the sketch below)!
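
As a rough example (the port range is just an illustration for a 5548UP), converting the last four ports to FC would look something like this...and again, it doesn't take effect until after a reload:

slot 1 
port 29-32 type fc 
exit 
copy running-config startup-config 
reload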

Now for the configuration...it's pretty darn straightforward:

1. Enable the required features:

feature npiv 
feature fport-channel-trunk 
feature fcoe

2. Configure the port-channel (in this case it's to the FIs)...With NPIV enabled, you must assign a virtual SAN (VSAN) to the SAN port channels that connect to the fabric interconnects:

interface san-port-channel 29 
channel mode active 
switchport trunk mode on 
switchport trunk allowed vsan 1
switchport trunk allowed vsan add 4

3. Add the SAN port channel to an existing VSAN database on the data center core Cisco Nexus 5500UP-A switch:

vsan database 
vsan 4 
interface san-port-channel 29

4. On the data center core Cisco Nexus 5500UP-A switch, configure the SAN port channel on physical interfaces. The Fibre Channel ports on the Cisco Nexus 5500UP switches are set to negotiate speed by default. 

interface fc1/29 
switchport trunk mode on 
channel-group 29 force 
!
interface fc1/30 
switchport trunk mode on 
channel-group 29 force


Misc.


As performed from an MDS switch.  Here we can see that the local domain ID of this particular switch is 0x91 and the peer switch's is 0x63.  Any hosts that perform a FLOGI to this switch will be given an FCID with the 0x91 prefix.

For example...there is a host logged in on fc2/3.  The 0x910000 is the FCID for this host.  We can tell from this output that this device is directly attached to the SAN switch "MDS1."


From the output below, we can determine that the MDS switch, "MDS3" has two equal paths to the 0x91 domain, via fc2/13 and fc2/14.

Below, we can see the output of the show run on the SAN-switch-facing ports of a UCS FI.  As indicated, mode "NP" equates to proxy N_Port.  What does this do for us?  Well, being that the domain ID portion of an FCID is 1 byte long, there is a theoretical maximum of 255 switch domains in a fabric (note: there are some reserved values, so the actual number is less than this).  Running in NPV mode means the FI doesn't consume a domain ID of its own; it proxies FLOGIs upstream in a fashion similar to NAT.


Below, NPIV (N_Port ID Virtualization) is the magic that allows us to have multiple fabric logins on an individual host-facing port, and NPV (N_Port Virtualization) is the proxy portion.

iSCSI

To come..

NFS

To come..