Hi Guys!
In this post I am going to assume you already know the basics of FCIP, and we will spend most of our time talking about the more advanced aspects.
We will have fun performing benchmarks etc to show the speed that FCIP is capable of :)
Below is a screenshot of the Roadkil Disk Speed result for our drive connected via standard FC. As you can probably tell, it's not the fastest disk in the world ;) but it will do for our purposes:
OK, before we go any further, let's get a benchmark and discuss our topology.
Two switches, MDS 1 and MDS 2, with one switch hosting the target and the other hosting the initiator. For the purpose of our test we will connect these switches back to back and then run FCIP over them, so that the initiator has to go to switch 1, across the FCIP link to switch 2, and then to the actual target itself.
As always, most of what I am showing you here I learnt from the Cisco doco:
http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/sw/4_1/configuration/guides/cli_4_1/cfcip.html#wp2395482
Let's get the basic config going and go from there
So the first thing we will do is get the FCIP link up, then we will tune some of the TCP parameters we can play with. You will see that Cisco have spent a long time trying to optimise the TCP connections for FCIP; a lot of what they do (selective ACKs, path MTU discovery, aggressive window sizing) is exactly the same kind of thing that Cisco WAAS (and indeed all WAN accelerators) do.
Switch 1:
interface GigabitEthernet1/1
ip address 169.254.1.1 255.255.255.252
no shutdown
feature fcip
fcip profile 1
ip address 169.254.1.1
!
I then create my FCIP interface:
int fcip1
use-profile 1
peer-info ipaddr 169.254.1.2
!
You obviously configure the opposite on the switch on the other side. The FCIP interface needs to be no-shut on both ends; then the link will come up.
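For completeness, here is a sketch of the mirrored config on Switch 2 (assuming the same profile and interface numbering, with the .1 and .2 addresses swapped):
feature fcip
interface GigabitEthernet1/1
ip address 169.254.1.2 255.255.255.252
no shutdown
!
fcip profile 1
ip address 169.254.1.2
!
interface fcip1
use-profile 1
peer-info ipaddr 169.254.1.1
no shutdown
!
Once both sides are no-shut, we can check the link: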
Switch2(config-if)# show int fcip1
fcip1 is trunking
Trunk vsans (admin allowed and active) (1)
Trunk vsans (up) (1)
Trunk vsans (isolated) ()
Trunk vsans (initializing) ()
Interface last changed at Fri Jun 21 15:49:31 2013
Using Profile id 1 (interface GigabitEthernet1/1)
Peer Information
Peer Internet address is 169.254.1.1 and port is 3225
Write acceleration mode is configured off
Tape acceleration mode is configured off
Tape Accelerator flow control buffer size is automatic
FICON XRC Accelerator is configured off
Ficon Tape acceleration configured off for all vsans
IP Compression is disabled
Maximum number of TCP connections is 2
QOS control code point is 0
QOS data code point is 0
If we want, we can modify the QoS values that will get used under the interface:
Switch2(config-if)# qos control ?
0-63 QOS control code point value
Switch2(config-if)# qos control 24 ?
data Configure qos for control and data packets
Switch2(config-if)# qos control 24 data ?
0-63 QOS data code point value
Switch2(config-if)# qos control 24 data 23 ?
<cr>
Switch2(config-if)# qos control 24 data 23
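So if we decided to keep those markings, the final config is just the one line under the fcip interface, set the same on both peers (24 and 23 are only the example DSCP values we typed above):
interface fcip1
qos control 24 data 23
!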
Let's keep looking through the rest of the show interface output:
TCP Connection Information
2 Active TCP connections
Control connection: Local 169.254.1.2:3225, Remote 169.254.1.1:65500
Data connection: Local 169.254.1.2:3225, Remote 169.254.1.1:65502
6 Attempts for active connections, 0 close of connections
FCIP uses two TCP connections: one for data and one for the control (switch-to-switch) traffic.
TCP Parameters
Path MTU 1500 bytes
Current retransmission timeout is 200 ms
FCIP will discover the path MTU as part of its negotiation. The retransmission timeout is another useful TCP parameter: it specifies how long TCP should wait for an ACK after sending a segment before retransmitting it (tuning this helps response times over WAN links).
Round trip time: Smoothed 10 ms, Variance: 5 Jitter: 150 us
Advertized window: Current: 40 KB, Maximum: 40 KB, Scale: 5
Peer receive window: Current: 34 KB, Maximum: 34 KB, Scale: 5
Congestion window: Current: 14 KB, Slow start threshold: 112 KB
Current Send Buffer Size: 40 KB, Requested Send Buffer Size: 0 KB
CWM Burst Size: 50 KB
Measured RTT : 0 us Min RTT: 0 us Max RTT: 0 us
All of the TCP windowing is hugely important, and it is controlled by the bandwidth parameters you specify: FCIP uses your configured bandwidth along with the measured RTT to decide how big to make the window. This is why getting those parameters right is quite important; values that are too high will starve your other traffic, because FCIP sets the window size aggressively. CWM stands for Congestion Window Monitoring and is used to control how the window size responds to congestion.
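For reference, most of these behaviours are tunable under the fcip profile. Treat the snippet below as a sketch; the exact option names and defaults vary by hardware and NX-OS release, so verify them with ? on your own box. The values shown simply mirror the defaults we saw in the show output (200 ms retransmission timeout, 50 KB CWM burst):
fcip profile 1
! option names from memory - check with "tcp ?" under the profile
tcp sack-enable
tcp pmtu-enable
tcp cwm burstsize 50
tcp min-retransmit-time 200
!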
OK, before we go any further and start tuning the parameters, let's get another benchmark. Bear in mind that this is now going over a 1 Gig Ethernet connection instead of the 2 Gbps FC link.
(Test run twice)
Oh dear, things have gotten a little worse ;) Let's try increasing the MTU size:
interface GigabitEthernet1/1
ip address 169.254.1.1 255.255.255.252
switchport mtu 8000
(Done on both ends)
We can see that path MTU has been automatically discovered:
Switch1# show int fcip1 | inc MTU
Path MTU 8000 bytes
Awesome, let's test
As you can see it's clearly an improvement, especially when we start dealing with larger block sizes.
Let's see if we can get any further improvement by modifying our TCP parameters. This is a gigabit link and it's absolutely dedicated to FCIP, so let's tell the link that. Let's also measure the round-trip delay with the handy-dandy command shown below:
Switch1# ips measure-rtt 169.254.1.2 interface gigabitethernet 1/1
Round trip time is 62 micro seconds (0.06 milli seconds)
Switch1# ips measure-rtt 169.254.1.2 interface gigabitethernet 1/1
Round trip time is 63 micro seconds (0.06 milli seconds)
Switch1# ips measure-rtt 169.254.1.2 interface gigabitethernet 1/1
Round trip time is 63 micro seconds (0.06 milli seconds)
Switch1# ips measure-rtt 169.254.1.2 interface gigabitethernet 1/1
Round trip time is 62 micro seconds (0.06 milli seconds)
Switch1# ips measure-rtt 169.254.1.2 interface gigabitethernet 1/1
Round trip time is 63 micro seconds (0.06 milli seconds)
Switch1# ips measure-rtt 169.254.1.2 interface gigabitethernet 1/1
Round trip time is 63 micro seconds (0.06 milli seconds)
Switch1# ips measure-rtt 169.254.1.2 interface gigabitethernet 1/1
Round trip time is 63 micro seconds (0.06 milli seconds)
Switch1#
I am noticing a pattern here ;) 63 microseconds
Let's do it:
fcip profile 1
tcp max-bandwidth-mbps 1000 min-available-bandwidth-mbps 990 round-trip-time-ms 63
Set this on both ends. Let's see what it has done to our show int fcip1 output:
show int fcip1
- output omitted -
TCP Parameters
Path MTU 8000 bytes
Current retransmission timeout is 200 ms
Round trip time: Smoothed 2 ms, Variance: 1 Jitter: 182 us
Advertized window: Current: 27 KB, Maximum: 26 KB, Scale: 11
Peer receive window: Current: 24 KB, Maximum: 24 KB, Scale: 4
Congestion window: Current: 28 KB, Slow start threshold: 6853 KB
Current Send Buffer Size: 26 KB, Requested Send Buffer Size: 0 KB
CWM Burst Size: 50 KB
Measured RTT : 0 us Min RTT: 0 us Max RTT: 0 us
As you can see from the above, access times have gone down, and our overall score is the highest it has been so far!
OK, now that this is all hopefully a bit clearer, let's think about what is probably restricting us the most here: the bandwidth. At only 1 Gig a second, we know the disks are capable of more, as we proved in the fibre-channel benchmark we did at the start. We can actually add capacity in a few different ways: we can etherchannel the two Ethernet ports together and form a single FCIP link over the top of that, OR we can create a SAN port channel with our two FCIP interfaces, OR we can rely on FSPF to perform the load balancing for us.
Awesome hey?
Let's start with the easiest, which is just adding another FCIP interface on the other gigabit interface and letting the built-in FC load balancing sort us out:
Switch1# show vsan 1
vsan 1 information
name:VSAN0001 state:active
interoperability mode:default
loadbalancing:src-id/dst-id/oxid
operational state:up
Nice! We load balance on the exchange ID. OK, let's get this show on the road!
fcip profile 2
ip address 169.254.2.1
interface fcip2
use-profile 2
peer-info ipaddr 169.254.2.2
no shutdown
interface GigabitEthernet1/2
ip address 169.254.2.1 255.255.255.252
switchport mtu 8000
no shutdown
Get the interface up and trunking, and ensure that FSPF is going to load balance:
Switch2(config-if)# show fspf internal route vsan 1
FSPF Unicast Routes
---------------------------
VSAN Number Dest Domain Route Cost Next hops
-----------------------------------------------
1 0x01(1) 10 fcip1
fcip2
Awesome, let's do a disk test!
The results are the same? Yep, they sure are. I am afraid to say they will be, because the exchange ID will be exactly the same, meaning one link is going to sit there idle while the other is busy working away.
:(
Let's see if the results differ using a SAN port channel:
interface fcip1
channel-group 1 force
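For completeness, here's a sketch of the full config: both FCIP interfaces go into the same channel group, and the mirror of this goes on the other switch (I'm assuming channel-group 1 on both sides):
interface fcip1
channel-group 1 force
no shutdown
!
interface fcip2
channel-group 1 force
no shutdown
!
FSPF now sees the port channel as the single next hop: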
FSPF Unicast Routes
---------------------------
VSAN Number Dest Domain Route Cost Next hops
-----------------------------------------------
1 0x01(1) 500 port-channel 1
All in all, not much of an improvement, but at least I am showing you the config options we have with port channels ;). As I mentioned, this WOULD improve if the exchange IDs differed, but because the exchange ID remains the same, we won't see an improvement here.
I would love to show you how we can do this on the Ethernet interfaces, but unfortunately I cannot, as my MDS switches won't let me configure the Ethernet ports as a port channel :(
So instead, let's look at even more advanced FCIP options, I want to make this baby run like the wind!
Let's go back to one port, like we are used to.
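One way to do that, as a sketch: simply shut fcip1 down on both switches and leave fcip2 carrying the traffic (and remove the port channel if you created one):
interface fcip1
shutdown
!
FSPF now only knows about fcip2: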
Switch2# show fspf internal route vsan 1
FSPF Unicast Routes
---------------------------
VSAN Number Dest Domain Route Cost Next hops
-----------------------------------------------
1 0x01(1) 10 fcip2
MTU is still 8000
Switch2# show int fcip2 | inc MTU
Path MTU 8000 bytes
Before we enable the advanced write-acceleration feature, let's get a benchmark.
OK, now let's chat about write acceleration. This is a feature that can be enabled on FCIP to proxy the TRANSFER_READY from the target, so that the local device starts transferring data without waiting for the response to come back from the remote side. You have to be very careful not to combine an FCIP link enabled for write acceleration with another link that is not, otherwise frames may arrive out of order!
The picture below from Cisco explains it better than I can:
To all the WAAS fans out there, this probably looks very familiar ;)
OK, let's look at how to configure it.
On each peer, just enable it under the FCIP interface:
interface fcip2
write-accelerator
show int fcip2
fcip2 is trunking (Not all VSANs UP on the trunk)
- output omitted
Write acceleration mode is configured on; operationally on
Let's look at a benchmark now!
Nice improvements!
OK, let's look at a few more FCIP Options
Next let's try tape acceleration, which essentially works very similarly to write acceleration (and in fact builds on it), but also improves read performance by buffering data at the remote end; this is designed to greatly improve sequential speeds.
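Config-wise it lives in the same place as write acceleration. On the MDS releases I have used, tape acceleration is enabled as an option of the write-accelerator command under the FCIP interface; treat this as a sketch and check the exact syntax with ? on your platform:
interface fcip2
! syntax may vary by release - verify with "write-accelerator ?"
write-accelerator tape-accelerator
!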
Here we did not notice much of an improvement, quite possibly because tape acceleration is really made for WAN links; we are using a back-to-back link, so latency is not an issue, and this acceleration is probably slowing us down more than helping us out in this particular scenario.
The last FCIP option we have to configure is compression, so let's take a look at that.
This is configured under the interface, and exactly what you can support will vary depending on your hardware platform and OS release:
Switch1(config-if)# ip-compression ?
auto Auto compression setting
mode1 Fast compression for high bandwidth links
mode2 Moderate compression for medium bandwidth links
mode3 High compression for low bandwidth links
The auto mode will select the most appropriate compression based on your hardware and software.
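Let's let auto pick for us; the same mode gets set under the FCIP interface on both peers (auto is just the choice for this test):
interface fcip2
ip-compression auto
!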
As you can see, this made our results a little less predictable: our 512k block size speed is excellent, but overall we are slower than usual. It probably does not help that we are using random blocks here, so the compression is not as effective ;)
Switch2# show ips stats hw-comp interface gigabitethernet 1/2
HW Compression statistics for port GigabitEthernet1/2
Compression stats
6646460828 input bytes, 3194897721 output compressed bytes
1659500 input pkts, 1659500 output compressed pkts
Decompression stats
912056556 input compressed bytes, 1106770484 output bytes
462369 input compressed pkts, 462369 output pkts
Passthru stats
0 input bytes, 0 output bytes
0 input pkts, 0 output pkts
Miscelleneous stats
32 min input pktlen, 8958 max input pktlen
28 min output pktlen, 9588 max output pktlen
0 len mismatch, 0 incomplete processing
0 invalid result, 0 invalid session drop
370428 comp expanded
Errors stats
0 decomp tx error, 0 post comp error
0 post decomp error, 4238738688 comp packets expanded
I hope you have enjoyed this exposé on FCIP. I am left with iSCSI and then I am finished with the storage topics; I will be studying ACE tomorrow, then the following week UCS and the Nexus 1000v!