Quick and easy way to get around the Cisco UCS login errors: java.io.IOException: Invalid Http response / java.io.IOException: Server returned HTTP response code: 400 for URL: http://x.x.x.x:443/nuova

Hey Guys!


You might have experienced the exceptionally annoying error with Cisco UCS and the latest update to Java (and apparently the fault is a Java programming fault, not Cisco UCS) where, after trying to log in to Cisco UCS, you get the following error:

Login Error: java.io.IOException: Invalid Http response
Login Error: java.io.IOException: Server returned HTTP response code: 400 for URL: http://x.x.x.x:443/nuova


Here is a quick and easy work-around: enable HTTP via the SSH command shell, then browse to the device over HTTP instead of HTTPS. At least that will get you on there quickly instead of having to stuff around with Java.

Naturally, since everything over HTTP is transmitted in clear text, I would recommend disabling it again once the Java issue is resolved.

UCS# scope system 
UCS /system # scope services 
UCS /system/services # enable http 
UCS /system/services # disable http-redirect 
Warning: When committed, this closes all the web sessions. 
UCS /system/services* # commit
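
Once the Java issue is resolved, reversing the change is just the mirror image of the above (a sketch following the same CLI path; re-enabling the redirect pushes browsers back to HTTPS, or you can disable http outright):

UCS# scope system 
UCS /system # scope services 
UCS /system/services # enable http-redirect 
UCS /system/services* # commit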

CCIE DC: How I passed the CCIE DC

Hi Guys!

As per the subject, I passed my CCIE DC, and in this blog post I will describe the process that worked for me. What worked for me might not work for you! So keep that in mind.

OK, I am going to assume in this post that you have already passed the CCIE DC Written and are looking at the lab.

There are several aspects to the CCIE DC lab that are worth considering. The first is lab rack availability: at the moment at least, it is quite sparse. Until this situation improves, I recommend making a lab booking 4 to 6 months out from when you're ready to knuckle down and start studying hard (assuming you studied hard for the written).

The other aspect is that rack rental availability from vendors such as INE and IPExpert is also quite tight at the moment, which makes things difficult. In this blog post we will go through some of the strategies that help with that situation.

Before I get too deep, let me take a quick chance to address the "big two" of vendor training, INE and IPExpert.

I am absolutely not a fanboy by any stretch of the imagination: I will use products from either vendor to achieve my goal. I just want to pass. Each company has pros and cons, and by combining both, you're more likely to get a nice mix. Don't dismiss other options such as traditional training from Cisco vendor partners like Firefly, Fastlane etc. It was through a Firefly course that I met a very good friend of mine, Adriaan Steyn. Believe me when I tell you he taught me PLENTY of great info about the Nexus line of switches. I also did a UCS course with Housley Communications, and that too was great training.

Good training, like good products, is not restricted to a certain vendor: you might love Sony headphones but hate Sony TVs, for example. Don't restrict yourself with an "oh, my training partner is XYZ" attitude; that is dumb. Combine all the training you can, learn from whoever you can.

(Full Disclosure: I am likely to be doing training material for a training vendor at some point in the future.)

OK let's start talking about the study method I used.

First of all, you need to learn the protocols and topics on the blueprint; that much should be obvious. To study, I hand write all my notes. I always hand write, never type: your brain is wired through thousands of years of evolution for writing by hand, and you absorb the information more readily than by typing. It is too easy when typing notes to just type and not really think about what it is you're typing (especially if, like me, you can touch type). Hand writing forces you to slow down, and the fact that you have to write it means you will invariably reword it to make it shorter, which means you will understand it better. Plus it's much, much easier to draw a network engineer's favorite tool: a diagram.


So, I got a notebook, printed a copy of the blueprint, stuck it on the inside cover, and started watching the INE videos. I went through every single one, hand writing notes as I went along. I started with the topic that most interested me and was newest to me: storage. (Incidentally, plenty of people will tell you that storage is not a big part of the CCIE DC. They are wrong; it's a huge part. So many things depend on storage that if you don't know it backwards you will not pass.)

Starting on a new topic got me enthusiastic and gave me interest I needed to stay motivated.

I then bought myself two Cisco MDS 9216i switches, and I cannot recommend this enough: you SHOULD buy these switches for your own home lab. They are about 500 bucks on eBay (I am selling mine if anyone is interested, contact me at peter@peterrevill.com). I am honestly not just pimping them because I am selling mine: it's INCREDIBLY useful to be able to test the storage stuff, even in a limited fashion, in your own home, and for such a low price this is one of the few parts of the CCIE DC you can play with yourself. Be warned: they are VERY noisy. Make sure you get ones with Generation 2 modules; Generation 1 modules CANNOT do certain types of port channels and it can become very frustrating.

So now I was watching the INE storage videos, taking notes, then testing what I could on my home lab equipment. Certain topics like FCoE I couldn't test, so at that point I would do rack rentals from either IPExpert or INE. Both have their pros and cons. With IPExpert you get access to a FULL lab of the ENTIRE CCIE DC topology, _but_ availability is quite low at the moment. With INE (and this may have changed, you would be best off asking them), you don't have access to the whole N7K yourself, so you can't make your own VDCs, for example, and you can't create a storage VDC. I also _BELIEVE_ that the current N5K is a 5010 or 5020 and not a 5548, which also limits some of the things you can do. You can still do a HECK of a lot with the 5020s, but if you're desperate to have the exact same equipment as the DC lab, you will want to get yourself some IPExpert time.

Speaking of the rack rental systems, I must say the INE system that you use for booking and organizing is _EXCELLENT_. I did humbly suggest one improvement, but all in all it is excellent; you can tell they have spent a lot of time working on it and it shows.


So now I am busy going through this topic, and at this point I have not done any workbooks or labs, just the theory, testing it out myself. I started posting on my blog while this was going on. I strongly recommend starting a blog: being able to successfully communicate a concept to someone else means that you UNDERSTAND that concept. Plus you can meet some great people through your blog, and it's a great way to organise your own thoughts and opinions.

I kept this process up through all the individual topics: UCS, virtualization, ACE. I went through pretty much all the videos provided by INE and studied each of those topics. Once this was finally done, it was time to start the practice labs.

Each vendor has technology labs and full-scale mock labs, and both have, as I mentioned, pros and cons; I can't recommend doing both enough. You will always learn more. Data Centre, being a new track, has a few people with somewhat of a misunderstanding of some topics. This is probably the first CCIE I have done where I have _NOT_ been able to take whatever is said in the workbook and solution guide as gospel: the technologies are just too new. For some topics you really do have to study yourself; if the answer is unclear, go ahead and check it. Be careful listening to other people: many people out there post flat-out wrong answers to questions. The best way, and the only way in my opinion, is to _TEST IT YOURSELF_.

Perfect example: during my bootcamp there was a discussion about vPC, what kind of BPDUs vPC would send, which switch (primary or secondary) would send them, and what commands you could use to tweak it. So instead of everyone arguing and trying to show how big their brains were, we said: let's run a spanning-tree BPDU debug (or whatever the command is) and CHECK.
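
If you want to try the same thing, the BPDU counters give you a quick, non-intrusive answer before you even reach for a debug (a sketch; the port-channel number and counter values are illustrative, and the exact output format varies by platform):

N7K7# show spanning-tree interface port-channel 10 detail | inc BPDU
   BPDU: sent 1250, received 3

Run that on both vPC peers and the counters tell you which switch is actually sending the BPDUs.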

_VERIFY_, _VERIFY_, VERIFY!

There is SO much misinformation out there that you just HAVE to check for YOURSELF what works, what does not, and how the technology works. This is how I try to structure my blog: I theorize how I believe a protocol or feature should work, then I test my hypothesis, and I humbly submit my findings to you, my blog reader. It's a little thing you might have heard of called the scientific method. Adopt this in your studies: trust nothing, verify it yourself, and prove to yourself that you understand exactly how it works.

To complement my study regime, I started reading quite a few books; my friends Ron Fuller and David Jansen have a great Cisco Nexus book, for example. In no particular order, I read the following books pretty much from cover to cover.

(Full disclosure: if you buy the books below using the links on my webpage, I get a small percentage of that sale. It does not add any cost to the price of the book, and I would consider it a favour if you use my links.)

The first is a Nexus book, and it was a great one. Some very important topics are covered very well here; it really complemented the INE videos and helped me fill in the gaps in my knowledge. If you only buy one book for your CCIE DC studies, make it this one.



NX-OS and Cisco Nexus Switching: Next-Generation Data Center Architectures (2nd Edition) (Networking Technology)
The second book is a great book on Cisco UCS. It's a little dated at the moment, but I believe a second edition is on its way:
Cisco Unified Computing System (UCS) (Data Center): A Complete Reference Guide to the Cisco Data Center Virtualization Server Architecture (Networking Technology)
This book is also highly recommended in the CCIE DC community. I must confess I have not got a copy myself, but everyone who has read it raves about it:
Data Center Virtualization Fundamentals: Understanding Techniques and Designs for Highly Efficient Data Centers with Cisco Nexus, UCS, MDS, and Beyond
The final book I will recommend is a multicast concept book. Multicast is an important part of the CCIE DC, and this book is extremely well written, and actually kind of funny to be honest: the author cracks jokes the whole way through. Multicast was a mystery to me until I read this book for my CCIE R&S. It is now a tiny bit dated, but multicast has not changed all that much, so it was a great resource for me: Developing IP Multicast Networks, Volume I (paperback)
At this point I was ready to begin discussing my thoughts on protocols with my friends and colleagues studying for the same exam (or performing data-centre-related work out in the field). This is a crucial step: your friends will challenge your knowledge, and if you don't know a topic well and they do, they will unveil this weakness to you. My friends and I did this to great success; we would throw each other hairy questions and make friendly wagers on how a particular technology is meant to work, like how vPC peer-switch actually works. I confess to losing quite a few of these wagers!


I became active in the CCIE DC Facebook group, which you should definitely join if you're on this quest. You will get some great questions, tips, and tricks for all your data centre work in this great little community. Respect the rules and join! (Full disclosure: I am an admin of this Facebook group, but I receive no compensation.)

I also became active on Twitter and tried my best to help people out there. Good questions can help you understand a technology better, but be careful not to claim you have an answer to a problem if you don't. Try not to mislead people; there is too much misinformation out there already.


For rack time, a great tool is the Cisco PEC: use it! Remember, even though it has particular labs, if the equipment matches what you're trying to learn, you're NOT restricted to JUST the labs they list. The PEC is a valuable resource: use it.

After I finished the technology labs and a few mock labs, I went on my CCIE Data Centre bootcamp with IPExpert. I always do this as my final preparation: the bootcamp should be your final step. Don't go to a bootcamp as one of your first steps; it will be wasted on you. You can't possibly cram the massive amount of information and the massive topic list into a week or two, or however long you have. Your bootcamp should be about solidifying everything and making sure you know what's going on. A sign you're on the right track is throwing great, tricky questions at your lecturer.

After this I was ready for my first attempt, which I took in Brussels. While I was quite close, I failed, and I was absolutely distraught: I really thought I had nailed it, and I finished quite early. My mistake was not reading the questions carefully. I was humbled by the first attempt; it showed me that although I might have most of the technologies understood, and I definitely had the speed, I did NOT have the careful reading that this exam requires.

I quickly booked a second attempt, and in the time between attempts I just studied my notes again and kept everything fresh: I already had the knowledge, I just didn't have the careful reading the exam requires. Being careful, I managed to pass the second time.

I hope this helps someone out there. To all my loyal readers and people with kind words: it honestly makes me SMILE from EAR TO EAR when I know that one of my blog posts has helped you! I like to help people, it's in my nature. I wish you all the luck in the world in your quest to obtain the CCIE DC.





IPExpert running a free iPad competition

Hi Guys!

IPExpert are running a competition to win a free iPad. Check it out below.



Interested in winning a free iPad mini? Just follow us on Twitter or “Like” our Facebook page and be entered into a weekly drawing! A winner will be announced every Friday when we post our success blog!


CCIE DC: Definitive Jumbo frames

Hi Guys!

This blog post attempts to be the DEFINITIVE guide to jumbo MTU. It's a topic that DOES MY HEAD IN!

There are SO many possible iterations. There's:

MTU with Nexus 5k
MTU with Catalyst Switches
MTU with MDS
MTU with Nexus 7k
MTU on SVI Interfaces
MTU on physical interfaces
MTU on UCS (Both on the FI itself and the vNIC'S)
MTU on C-Series
MTU on Nexus 1000v

What interaction does MTU have with vPC, FabricPath, port-channels, OTV? Routing protocols?

What interactions does MTU have with FCoE? FC? SAN-Port-Channels?

MTU on FC?
Why 9216 vs 9000?


MTU on Nexus 7000

So, this is (for me at least) the most complicated part of the jumbo MTU discussion. Let's start at the beginning:

A Nexus 7000, by default, has the following command configured:
system jumbomtu 9216


Under a VDC, I could not remove this command:

N7K5(config)# show run all | inc jumbo
system jumbomtu 9216
N7K5(config)# no system jumbomtu 9216
N7K5(config)# show run all | inc jumbo
system jumbomtu 9216
N7K5(config)# show run all | inc jumbo
system jumbomtu 9216
N7K5(config)#


I am not sure if this is a limitation of VDCs or simply something I am doing wrong, but regardless, I was unable to turn off this command.

This fits with the story I have always heard: that a Nexus 7000 is enabled for jumbo frames BY DEFAULT, and that you do not have to do ANYTHING. But of course, it's a little bit more complicated than that.


If you look at an interface with a show run all command, you will see an interesting default:

#show run all | beg 1/17

interface Ethernet1/17
 (Output omitted)
  mtu 1500
  snmp trap link-status
  logging event port link-status default
  logging event port trunk-status default
  bandwidth 10000000

 (Output omitted)
!

The mtu 1500 command is present on ALL interfaces when you do a show run all. What the hell is it? Does it override my system jumbo MTU? Do I have to set it too?

If you look at an interface with show int, you annoyingly see exactly the same thing:


N7K5(config-if)# show int eth1/17 | inc MTU
  MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec
N7K5(config-if)#


Yet certain blog posts would have us believe that because we have the system jumbomtu command, we don't have to do anything.

Let's leave the value unchanged, do a ping, and see what happens:


N7K6# ping 169.254.1.1 df-bit packet-size 8000
PING 169.254.1.1 (169.254.1.1): 8000 data bytes
Request 0 timed out
^C
--- 169.254.1.1 ping statistics ---
2 packets transmitted, 0 packets received, 100.00% packet loss
N7K6# ping 169.254.1.1 df-bit packet-size 1500
PING 169.254.1.1 (169.254.1.1): 1500 data bytes
Request 0 timed out
^C
--- 169.254.1.1 ping statistics ---
2 packets transmitted, 0 packets received, 100.00% packet loss
N7K6# ping 169.254.1.1 df-bit packet-size 1472
PING 169.254.1.1 (169.254.1.1): 1472 data bytes
1480 bytes from 169.254.1.1: icmp_seq=0 ttl=254 time=1.273 ms
1480 bytes from 169.254.1.1: icmp_seq=1 ttl=254 time=0.881 ms
1480 bytes from 169.254.1.1: icmp_seq=2 ttl=254 time=1.182 ms
1480 bytes from 169.254.1.1: icmp_seq=3 ttl=254 time=1.179 ms
1480 bytes from 169.254.1.1: icmp_seq=4 ttl=254 time=1.185 ms

--- 169.254.1.1 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 0.881/1.139/1.273 ms


No jumbo frames for us! The maximum we could do is 1472, which makes sense: 1472 bytes of ICMP payload plus 8 bytes of ICMP header plus 20 bytes of IP header is exactly 1500.

Let's show that we just have the default config:



N7K6# show run int eth1/17

!Command: show running-config interface Ethernet1/17
!Time: Sun Jun 30 05:50:14 2013

version 6.0(2)

interface Ethernet1/17
  switchport
  switchport mode trunk
  no shutdown

N7K6# show int e1/17 | inc MTU
  MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec




So! Right now we can see, clear as day: if we don't change the interface MTU value, jumbo frames do not work!



So let's try changing this value and see what happens

N7K5(config-if)# int eth1/17
N7K5(config-if)# mtu 9216
 

Let's see if our output has changed now that we have set it to 9216:

N7K5(config-if)# show int eth1/17 | inc MTU
  MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec

 

It has changed the output. Do our pings work now?


N7K6(config-if)# exit
N7K6(config)# exit
N7K6# ping 169.254.1.1 df-bit packet-size 1472
PING 169.254.1.1 (169.254.1.1): 1472 data bytes
1480 bytes from 169.254.1.1: icmp_seq=0 ttl=254 time=1.457 ms
1480 bytes from 169.254.1.1: icmp_seq=1 ttl=254 time=0.86 ms

--- 169.254.1.1 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 0.77/0.952/1.457 ms
N7K6# ping 169.254.1.1 df-bit packet-size 1500
PING 169.254.1.1 (169.254.1.1): 1500 data bytes
1508 bytes from 169.254.1.1: icmp_seq=0 ttl=254 time=1.341 ms
1508 bytes from 169.254.1.1: icmp_seq=1 ttl=254 time=1.067 ms

--- 169.254.1.1 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 1.067/1.235/1.343 ms
N7K6# ping 169.254.1.1 df-bit packet-size 8972
PING 169.254.1.1 (169.254.1.1): 8972 data bytes
8980 bytes from 169.254.1.1: icmp_seq=0 ttl=254 time=1.872 ms
8980 bytes from 169.254.1.1: icmp_seq=1 ttl=254 time=6.154 ms
Request 2 timed out



So the RESULTS ARE IN! Despite all the "experts" on the internet claiming otherwise (which led to me being led astray myself), the Nexus 7000 system jumbomtu command is NOT enough: on M-series line cards you _MUST_ set the MTU on the interface, regardless of whether it is an L2 or L3 interface.

 
Here is absolute proof:

http://www.cisco.com/en/US/docs/switches/datacenter/sw/5_x/nx-os/interfaces/configuration/guide/if_basic.html#wp1105874

I tested it myself, see below

N7K5(config-if)# mtu 1600
ERROR: Ethernet1/17: MTU on L2 interfaces can only be set to default or system-jumboMTU



On L2 interfaces you can only set the MTU to either your configured system jumbo MTU size or the default size; on L3 interfaces you can configure whatever value you want:


N7K8(config-if)# no switchport
N7K8(config-if)# mtu 4444

N7K8(config-if)# 

Note that you can change the MTU individually on both M and F series line cards, but on F line cards you're better off changing it globally via the system network-qos policy, as we will see below.


  What about a system-QOS Class?

Guess what: all of the above changes when it comes to F-series line cards. Although you can manually set the MTU on F line card interfaces just like on M line cards, and it will work, you can also change it globally using network-qos:


http://www.cisco.com/en/US/docs/switches/datacenter/sw/6_x/nx-os/qos/configuration/guide/nt_qos.html


F-series line cards require you to change the system QoS class just like on the 5K, but unlike the 5K, on a 7K F1 line card the changed value actually shows up under the interface:




SW1-1(config)# show int e4/1
Ethernet4/1 is up
  Dedicated Interface
  Hardware: 1000/10000 Ethernet, address: c464.1348.b2d8 (bia c464.1348.b2d8)
  MTU bytes (CoS values):  MTU  9216(0-2,4-7) bytes  MTU  2112(3) bytes
  BW 10000000 Kbit, DLY 10 usec, reliability 255/255, txload 1/255, rxload 1/255
!




Here is the config that was applied to make this happen:




policy-map type network-qos default-nq-7e-CCIE
  class type network-qos c-nq-7e-drop
    congestion-control tail-drop
    mtu 9216
  class type network-qos c-nq-7e-ndrop-fcoe
    pause
    mtu 2112
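
For reference, a policy-map like this does nothing until it is attached globally, which on the 7K is done under system qos from the default VDC (a sketch, assuming you have access to it):

N7K5(config)# system qos
N7K5(config-sys-qos)# service-policy type network-qos default-nq-7e-CCIE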


 

So you can see that it matches up, which makes it quite a bit easier to see what is going on. Great!

Here is a handy command on the Nexus 7K to see what is going on under the hood. In the example below, the N7K does NOT have any QoS config, because I was using a rack where you only have access to a single VDC:

N7K8# show system internal qos network-qos hw-config module 2 | inc MTU
MTU         = 1500 [FCoE: No] - MTU value for CoS 0 on this hardware
MTU         = 1500 [FCoE: No] - CoS 1
MTU         = 1500 [FCoE: No] - CoS 2
MTU         = 1500 [FCoE: No] - CoS 3
MTU         = 1500 [FCoE: No] - CoS 4
MTU         = 1500 [FCoE: No] - CoS 5
MTU         = 1500 [FCoE: No] - CoS 6
MTU         = 1500 [FCoE: No] - CoS 7
Interface    Config  Oper(VLs) MTU (value)





MTU on Nexus 5000

The Nexus 5000 is probably the simplest of all in terms of what we can and can't do with jumbo MTUs.

The thing about the Nexus 5K is that it shares a similar architecture with the Nexus 7000 F1 line cards, so it does not surprise me that on the Nexus 5K the only thing you have to do to enable jumbo frames is change the system QoS class:

policy-map type network-qos jumboMTU5k
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 9216
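
For completeness, the policy-map alone does nothing until it is attached globally under system qos (the same step appears again in the FabricPath section below):

N5K5(config)# system qos
N5K5(config-sys-qos)# service-policy type network-qos jumboMTU5k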



Just like on the Nexus 7000, if you're working with an L3 interface, you must set the MTU manually.


The annoying thing about the configuration on the Nexus 5000 is that the interface will still show the MTU as 1500:

N5K5(config-if)# show int eth1/6 | inc MTU
  MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec



This is a known bug, CSCsl21529: "An incorrect MTU value is displayed in the show interface command output. The Cisco Nexus 5000 Series switch only supports class-based MTU. Per-interface level MTU configuration is not supported. The switch supports jumbo frames by default. However, the show interface command output currently displays an incorrect MTU value of 1500 bytes."
But if you check the queuing on the interface, which is the QoS applied as part of the global policy, it will show the correct MTU:



N5K5(config-if)# show queuing interface eth1/6 | inc MTU
    q-size: 243200, HW MTU: 9280 (9216 configured)


So let's do a ping between our two Nexus 5Ks enabled for this.

N5K8# ping 169.254.1.1 df-bit packet-size 9000
PING 169.254.1.1 (169.254.1.1): 9000 data bytes
Request 0 timed out
Request 1 timed out
Request 2 timed out
Request 3 timed out
Request 4 timed out

--- 169.254.1.1 ping statistics ---
5 packets transmitted, 0 packets received, 100.00% packet loss
N5K8# ping 169.254.1.1 df-bit packet-size 2000
PING 169.254.1.1 (169.254.1.1): 2000 data bytes
2008 bytes from 169.254.1.1: icmp_seq=0 ttl=254 time=1.523 ms



Hmmm... I can ping at a certain size but not over it. This could simply be some sort of control-plane policing on the Nexus 5K: if I ping THROUGH these devices from my Nexus 7Ks, I can do the full 9000 bytes.


So, long story short, on the Nexus 5K: change the system class and you're done. Change the MTU on any L3 interfaces you want to use. Done.


MTU on SVIs

There is no trick to this one. It is as simple as this: M1, SVI, Nexus 5K, Nexus 7K, none of that matters: you must set an MTU on a layer 3 interface for it to support jumbo frames. Think of it like setting your operating system's MTU.
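
A minimal example (the VLAN number is illustrative; the same config shows up again in the FabricPath section):

N7K5(config)# interface vlan 10
N7K5(config-if)# mtu 9216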

-- Subtopic: MTU interaction with routing protocols

I am not going to spend too much time on this; you should know, if you're doing the CCIE DC, that many routing protocols require the MTU to match between neighbors.
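
OSPF is the classic example: a DBD MTU mismatch leaves neighbors stuck in EXSTART. A quick sketch (the interface is illustrative, and matching the MTUs properly is the real fix; the ignore knob is just a band-aid):

N7K5# show ip ospf neighbors
N7K5(config)# interface ethernet 1/17
N7K5(config-if)# ip ospf mtu-ignore
! mtu-ignore skips the MTU check in DBD packets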



MTU on Port-Channels

On the Nexus 7K, as you would expect, a port-channel inherits the MTU of its member ports, or you can just change the MTU on the port-channel itself. Either way, you must configure the MTU on the port-channel, whether it is layer 2 or layer 3 (there is a worked example in the vPC section below).

On the Nexus 5K, as we expect, the only thing that matters is the system class. As long as that allows jumbo MTU, you are laughing.



MTU and VPC


With vPC on the 7K, the peer-link is always set to a jumbo MTU, and you cannot change this:

N7K7(config-if)# mtu 1500
ERROR: port-channel10: Cannot configure port MTU on Peer-Link.

There is even a bug, CSCtf05232: from the 4.2 NX-OS release onwards, the vPC peer-link MTU is set to 9216 plus padding. An ISSU software upgrade from 4.1 to 4.2 will keep the vPC peer-link MTU at its pre-4.2 setting, and a shut/no shut is needed for the change to take effect.



If you have a port-channel configured and are using it with vPC, then on the Nexus 7K you must set a jumbo MTU under the port-channel on both vPC peers.

To test this, I configured a back to back vPC between a Nexus 5k set and a Nexus 7k set.

Here is the config:


N5k1:

 N5K7# show run | sect vpc|feature|port
feature lacp
feature vpc
feature lldp
vpc domain 2
  peer-keepalive destination 192.168.0.58
interface port-channel10
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link
interface port-channel11
  speed 10000
  vpc 11




N5k2:

 N5K8# show run | sect vpc|feature|port
feature lacp
feature vpc
feature lldp
vpc domain 2
  peer-keepalive destination 192.168.0.57

interface port-channel10
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link
interface port-channel11
  speed 10000
  vpc 11



N7k1:
N7K7# show run | sect vpc|feature|port-
feature interface-vlan
feature lacp
feature vpc
vpc domain 1
  peer-keepalive destination 169.254.99.8 source 169.254.99.7 vrf default
interface port-channel10
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link
interface port-channel11
  switchport
  vpc 11



N7k2:

N7K8# show run | sect vpc|feature|port-
feature interface-vlan
feature lacp
feature vpc
vpc domain 1
  peer-keepalive destination 169.254.99.7 source 169.254.99.8 vrf default
interface port-channel10
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link
interface port-channel11
  switchport
  vpc 11

!

In the above example, the two vPC peers could ping each other with jumbo frames (i.e. N7k1 could ping N7k2, and N5k1 and N5k2 could ping each other), BUT a 7K could not ping a 5K with jumbo frames. This is because port-channel 11 (the one that connects the 5Ks and the 7Ks) was not configured for jumbo frames on the N7K side. It IS configured for jumbo on the 5K side, because the 5K only uses the system class, remember? So we need to set the 7K's port-channel 11 to jumbo MTU:

N7K8(config)# int po11
N7K8(config-if)# mtu 9216


(Done on BOTH N7K's)

Now let's check it out


N5K7# ping 169.254.1.3 df-bit packet-size 2000
PING 169.254.1.3 (169.254.1.3): 2000 data bytes
2008 bytes from 169.254.1.3: icmp_seq=0 ttl=254 time=1.418 ms
2008 bytes from 169.254.1.3: icmp_seq=1 ttl=254 time=1.081 ms
2008 bytes from 169.254.1.3: icmp_seq=2 ttl=254 time=1.086 ms
2008 bytes from 169.254.1.3: icmp_seq=3 ttl=254 time=0.971 ms
2008 bytes from 169.254.1.3: icmp_seq=4 ttl=254 time=1.09 ms

Done. It's all very logical: a vPC port-channel acts just like any other port-channel. The only real take-away is that the peer-link always has its MTU set to jumbo and you can't change it.

MTU and FabricPath

I have covered FabricPath MTU in a previous post but will cover it here for the sake of completeness. FabricPath uses an outer header that makes the frame slightly bigger than a normal Ethernet frame; however, because it's actually NOT an Ethernet frame, but rather a FabricPath frame with an Ethernet frame encapsulated inside, this overhead does not need to be taken into account as long as your hardware supports it (F-series line cards and Nexus 5000s).

However, if you have a VLAN that you want to run jumbo frames on, and this VLAN is also FabricPath enabled, you still need to configure the interfaces for jumbo, which we will cover here.

Here is our basic FabricPath Config, super simple and easy:


install feature-set fabricpath
feature-set fabricpath
vlan 10
  mode fabricpath
int eth1/9 - 10
  switchport mode fabricpath
fabricpath domain default


We then configure a VLAN 10 SVI so we can ping between the switches.

Look at the first bit of relevant output:

N5K-p1-2# show fabricpath isis interface  brief
Fabricpath IS-IS domain: default
Interface    Type  Idx State        Circuit   MTU  Metric  Priority  Adjs/AdjsUp
--------------------------------------------------------------------------------
Ethernet1/9  P2P   1     Up/Ready   0x01/L1   1500 40      64          1/1
Ethernet1/10 P2P   2     Up/Ready   0x01/L1   1500 40      64          1/1


So right now our FabricPath runs between two 5Ks, and as you can see, the IS-IS MTU is 1500.

Let's test with a ping.

N5K-p1-2(config-if)# show run int vlan 10
interface Vlan10
  no shutdown
  mtu 9216
  ip address 169.254.1.2/24


N5K-p1-1# ping 169.254.1.2 df-bit packet-size 1500
PING 169.254.1.2 (169.254.1.2): 1500 data bytes
Request 0 timed out
Request 1 timed out

No dice. Let's try again after modifying the default QoS policy on the Nexus 5000:

N5K-p1-1(config-sys-qos)# show run | sect policy-map
policy-map type network-qos JUMBO
  class type network-qos class-default
    mtu 9216

N5K-p1-2(config-pmap-nq-c)# system qos
N5K-p1-2(config-sys-qos)# service-policy type network-qos JUMBO


Let's try a ping again:

N5K-p1-1# ping 169.254.1.2 df-bit packet-size 8972
PING 169.254.1.2 (169.254.1.2): 8972 data bytes
8980 bytes from 169.254.1.2: icmp_seq=0 ttl=254 time=3.328 ms
8980 bytes from 169.254.1.2: icmp_seq=1 ttl=254 time=4.898 ms
8980 bytes from 169.254.1.2: icmp_seq=2 ttl=254 time=4.958 ms
8980 bytes from 169.254.1.2: icmp_seq=3 ttl=254 time=19.405 ms
8980 bytes from 169.254.1.2: icmp_seq=4 ttl=254 time=3.161 ms

Jackpot! But note that the show fabricpath isis interface output does not change:

N5K-p1-1# show fabricpath isis inter brief
Fabricpath IS-IS domain: default
Interface    Type  Idx State        Circuit   MTU  Metric  Priority  Adjs/AdjsUp
--------------------------------------------------------------------------------
Ethernet1/9  P2P   1     Up/Ready   0x01/L1   1500 40      64          1/1
Ethernet1/10 P2P   2     Up/Ready   0x01/L1   1500 40      64          1/1


We have kind of come to expect that on the 5K though, so no worries.

Let's involve the 7k.

N7K-1-2# ping 169.254.1.1 df-bit packet-size 1500
PING 169.254.1.1 (169.254.1.1): 1500 data bytes
Request 0 timed out
Request 1 timed out
Request 2 timed out


--- 169.254.1.1 ping statistics ---
4 packets transmitted, 0 packets received, 100.00% packet loss
N7K-1-2# show fabricpath isis interf brief
Fabricpath IS-IS domain: default
Interface    Type  Idx State        Circuit   MTU  Metric  Priority  Adjs/AdjsUp
--------------------------------------------------------------------------------
Ethernet1/1  P2P   1     Up/Ready   0x01/L1   1500 40      64          1/1
Ethernet1/2  P2P   2     Up/Ready   0x01/L1   1500 40      64          1/1
Ethernet1/3  P2P   3     Up/Ready   0x01/L1   1500 40      64          1/1
Ethernet1/4  P2P   4     Up/Ready   0x01/L1   1500 40      64          1/1


No dice, until we bump the interface MTU manually (or change the system QoS globally):

N7K-1-2(config)# int eth1/1 - 8
N7K-1-2(config-if-range)# mtu 9216
N7K-1-2(config-if-range)# exit
N7K-1-2(config)# exit
N7K-1-2# ping 169.254.1.1 df-bit packet-size 1500
PING 169.254.1.1 (169.254.1.1): 1500 data bytes
1508 bytes from 169.254.1.1: icmp_seq=0 ttl=254 time=10.147 ms
1508 bytes from 169.254.1.1: icmp_seq=1 ttl=254 time=3.067 ms
1508 bytes from 169.254.1.1: icmp_seq=2 ttl=254 time=0.804 ms
1508 bytes from 169.254.1.1: icmp_seq=3 ttl=254 time=0.833 ms
1508 bytes from 169.254.1.1: icmp_seq=4 ttl=254 time=2.296 ms

--- 169.254.1.1 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 0.804/3.429/10.147 ms
N7K-1-2# ping 169.254.1.1 df-bit packet-size 8972
PING 169.254.1.1 (169.254.1.1): 8972 data bytes
8980 bytes from 169.254.1.1: icmp_seq=0 ttl=254 time=1.652 ms
8980 bytes from 169.254.1.1: icmp_seq=1 ttl=254 time=6.884 ms
8980 bytes from 169.254.1.1: icmp_seq=2 ttl=254 time=6.992 ms
8980 bytes from 169.254.1.1: icmp_seq=3 ttl=254 time=7.053 ms
8980 bytes from 169.254.1.1: icmp_seq=4 ttl=254 time=15.796 ms

Done and done.



MTU on OTV

To test this one, we used the simplest of configs to show you how it is done.
N7k1:

interface Overlay1
  otv join-interface Ethernet1/25
  otv extend-vlan 10
  otv use-adjacency-server 169.254.2.1 unicast-only
  no shutdown

!



N7k2:
interface Overlay1
  otv join-interface Ethernet1/25
  otv extend-vlan 10
  otv adjacency-server
  otv use-adjacency-server 169.254.2.2 unicast-only
  no shutdown

!



There is not much special about the config, and the default maximum packet size you can send is shown below:


N5K7# ping 169.254.1.2 df-bit packet-size 1430
PING 169.254.1.2 (169.254.1.2): 1430 data bytes
1438 bytes from 169.254.1.2: icmp_seq=0 ttl=254 time=1.542 ms
1438 bytes from 169.254.1.2: icmp_seq=1 ttl=254 time=1.441 ms
1438 bytes from 169.254.1.2: icmp_seq=2 ttl=254 time=1.187 ms
1438 bytes from 169.254.1.2: icmp_seq=3 ttl=254 time=1.168 ms
1438 bytes from 169.254.1.2: icmp_seq=4 ttl=254 time=1.157 ms


The maximum you can do is 1430. This is because OTV adds 42 bytes of overhead to a typical IP frame, so if 1472 is your maximum payload, take away 42 and that gives you 1430.

Let's try configuring some jumbo MTUs and see how much bigger we can get them to go.

The first thing is to enable jumbo frames on the SVIs, duh!


N5K7(config)# int vlan 10
N5K7(config-if)# mtu 9216


Still no dice. Next we apply a jumbo system QoS class on the 5Ks:

N5K7(config)# policy-map type network-qos JUMBO
N5K7(config-pmap-nq)# class type network-qos class-default
N5K7(config-pmap-nq-c)# mtu 9216
N5K7(config-pmap-nq-c)# exit
N5K7(config-pmap-nq)# system qos
N5K7(config-sys-qos)# service-policy type network-qos JUMBO



Still no dice.

However, if we set a jumbo MTU on the 7K's join interface and on the M-series interface facing the 5K:

int eth2/27
! (interface towards the 5k)
  mtu 9216
int eth1/25
! (the OTV join interface)
  mtu 9216
!

Everything works as expected, and we can send jumbo frames:



N5K7# ping 169.254.1.2 df-bit packet-size 2173
PING 169.254.1.2 (169.254.1.2): 2173 data bytes
2181 bytes from 169.254.1.2: icmp_seq=0 ttl=254 time=1.825 ms
2181 bytes from 169.254.1.2: icmp_seq=1 ttl=254 time=1.389 ms
2181 bytes from 169.254.1.2: icmp_seq=2 ttl=254 time=1.213 ms
2181 bytes from 169.254.1.2: icmp_seq=3 ttl=254 time=1.344 ms
2181 bytes from 169.254.1.2: icmp_seq=4 ttl=254 time=1.358 ms




MTU on FC

As per the blog post by Tony Bourke over at Datacenter Overlords:
http://datacenteroverlords.com/2013/04/01/jumbo-fc-frames/

You can modify MTU on a per-VSAN basis, and the MTU will be negotiated between the devices during FLOGI. (Mind the publication date on that post, though.)


MTU and FCIP

For FCIP, the MDS configuration guide covers the relevant MTU settings:
http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/sw/5_0/configuration/guides/ipsvc/fm/cfcip.html





MTU on UCS

So, I can tell you now definitively, as I have tested it again and again and I know it for sure: the most important thing is the system QoS class.

This does not surprise me: the Fabric Interconnect is based on the Nexus 5000, so it makes sense that it would work in a similar manner.

The system QoS class is the most important value to set. If you do not set it to support your jumbo frames, then no matter what you set in the OS and no matter what you set on the vNIC (and in fact you won't be able to set ANYTHING on the vNIC higher than the system class), it won't work.

That is found here:

[screenshot: the UCSM QoS System Class page]
You MUST change these MTU values to 9216, or whatever jumbo MTU you want. This is key; without this you will get NOWHERE!


OK, now we have that cleared up and we have changed the value:

[screenshot: the system class MTU changed to 9216]

Next, we need to know what this setting, under the vNIC, does (see the highlighted section):

[screenshot: the MTU field in the vNIC settings, highlighted]

As per Jeff Said So's blog post on the topic: http://jeffsaidso.com/2012/04/cisco-ucs-mtu-sizing-with-vic/

This value is STRICTLY used to INFORM the OS that the network card supports this particular MTU. If your OS supports auto-detection of the network card MTU, you then do not have to manually set the MTU in the operating system, which obviously saves time.
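
A quick way to see what the OS actually picked up from the vNIC (Linux shown, with illustrative interface name and output; Windows has an equivalent under the adapter's advanced properties):

$ ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 ...
$ ip link set eth0 mtu 9000    # manual override if auto-detection is absent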

Now, _IF_ you set the MTU value manually in the OS, as shown below:

[screenshot: the MTU set manually in the operating system's adapter settings]

but you leave the vNIC value at 1500, as I have done here:

[screenshot: the vNIC MTU left at the default of 1500]

Guess what? You still get jumbo frames:

[screenshot: a successful jumbo frame ping from the OS]

So!!! This value on the vNIC is used for one reason and one reason only: to inform the OS of the available MTU. _IF_ the OS supports auto-detection, it will set the adapter to the appropriate MTU. If you forget to set this value, or your OS does not support auto-detection, you will still get jumbo frames by setting the value in the OS MANUALLY, as long as the system class is configured.

My final advice? In the real world, set the jumbo MTU on the vNIC: it can't hurt, and if your OS supports auto-detection it saves you a step; if it does not, you just set the MTU manually in the OS.
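
To prove it end to end from the host side (Linux flags shown; 8972 = 9000 minus 20 bytes of IP header and 8 bytes of ICMP header, and the target address is illustrative):

$ ping -M do -s 8972 10.0.0.1    # -M do sets the DF bit so nothing fragments along the way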

This explains why the MTU you set on a vNIC does not show up under the actual vEthernet config in UCS: it's just a recommendation to the operating system. I would be interested to see what changing this value does on non-Cisco VIC cards, and in fact whether you can even change it. Any thoughts on this, please leave a comment.

So, this value is PURELY a suggestion to the OS; it is NOT enforced, but you might as well set it anyway.

The reason this field only goes up to 9000, even though you configured 9216 in the system class, is that most operating systems only understand a value of 9000: the switch-side 9216 leaves headroom for encapsulation overhead on top of a 9000-byte host MTU.


MTU on Nexus 1000v


Luckily this has become extremely simple since 4.0. There IS a system jumbomtu command, but it effectively does nothing: it cannot be removed, and if you try to change it, the only value it will accept is 9000:

[screenshot: the CLI rejecting any system jumbomtu value other than 9000]



Alright, so how do I set my jumbo MTU?


Well, you ONLY have to set it on your uplink (Ethernet) interfaces; it works BY DEFAULT on all your vEthernet interfaces.
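
A sketch of what that looks like (this assumes the mtu command is available under ethernet-type port-profiles on your 1000v release; the profile name is illustrative):

n1kv(config)# port-profile type ethernet SYSTEM-UPLINK
n1kv(config-port-prof)# mtu 9000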

See below:

[screenshot: vEthernet interfaces passing jumbo frames with no per-interface MTU configured]

So you don't have to set anything on the vEthernets; if you try to configure an MTU on a vEthernet, the option simply is not there:

[screenshot: no mtu option available under a vEthernet interface]

You can only set it on the uplink interface, and that is the only place you need to set it:

[screenshot: mtu configured on the uplink Ethernet interface]





