
Overview


Researchers at the University of Utah are seeing very poor performance to remote labs and other sites. The Center for High Performance Computing has set up perfSONAR boxes within the campus, at the border of the Utah Education Network, and at the Level 3 PoP to attempt to isolate and troubleshoot the issue. Two issues exist: the U WAN firewall limits flows between two machines to approximately 500-600Mb/s, and the UEN border to Salt Lake I2 router appears to be exhibiting issues.

Univ. of Utah Campus Helpdesk Altiris tickets 122413, 262433

Methodology

Our methodology was to use NDT for some comparison tests but bwctl for most tests. The bwctl program from the perfSONAR package allows third-party bandwidth tests. In other words, with perfSONAR 3.1.1 one can issue a command like:

./bwctl -s bwctl.salt.net.internet2.edu -c bwctl.kans.net.internet2.edu

to obtain bandwidth results from the Salt Lake City I2 PoP to the Kansas I2 PoP. Note that both NDT and bwctl use iperf under the hood, though bwctl can also drive nuttcp and thrulay. The approaches are similar.
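
For the NDT comparison tests, the stock NDT command-line client can be pointed at any of the public servers in the NDT server list linked under Test Machines. A minimal sketch (web100clt is the client that ships with NDT; the server name is just an example):

web100clt -n ndt.salt.net.internet2.edu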

To give us a good baseline for our campus performance measurements, we did a complete sweep of the I2 backbone from the I2 Salt Lake City (Level3) PoP. The attached text file contains the detailed results from yesterday. The summary result is that from the Salt Lake City PoP to any other PoP on the I2 network, the minimum bandwidth was 885Mb/s in 10s test bursts; most exceeded 933Mb/s. The I2 Salt Lake PoP also allowed us to connect to ESnet and the NCAR site at these speeds. From the Los Angeles I2 PoP, we were even able to sustain 4Gb/s-6.4Gb/s flows to sites that had 10G-enabled perfSONAR boxes. (Note: the SLC I2 node only has Gig at its site.) We performed each of our node tests 3 times to confirm the results were reproducible and not single anomalies.
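
The backbone sweep itself is straightforward to script. The loop below is only a minimal sketch of how such a sweep can be driven, assuming bwctl is on the path and using the four-letter I2 PoP codes listed later on this page:

#!/bin/bash
# Sketch: 10s bwctl tests from the SLC PoP to each I2 PoP, repeated 3 times
for pop in losa seat kans hous chic atla wash newy; do
  for run in 1 2 3; do
    echo "=== salt -> $pop (run $run) ==="
    bwctl -t 10 -s bwctl.salt.net.internet2.edu -c bwctl.$pop.net.internet2.edu
  done
done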

We then repeated this same sweep using bandwidth.chpc.utah.edu, the CHPC perfSONAR 3.1.1 box, which has a 10Gig interface. The results were highly variable, ranging from 36.9Mb/s (Houston) to 360Mb/s (Kansas). To ESnet (home of many of the labs), we saw 37.2Mb/s to 65.2Mb/s. This sweep yielded better results than in the past and better than the previous day's NDT results, but the variability of the results is inconclusive. The constant observation is that from the U, we receive dramatically poorer results compared to tests issued immediately outside of UEN.

Test Machines

Internet2

UEN/CHPC

NDT http://www.internet2.edu/performance/ndt/ndt-server-list.html

BWCTL from perfsonar
PerfSonar Performance Toolkit http://software.internet2.edu/pS-Performance_Toolkit/

U.S. Dept of Energy ESnet Network Performance Knowledge Base http://fasterdata.es.net/

Internet2 perfSONAR Software http://software.internet2.edu/pS-Performance_Toolkit/

Internet2 perfSONAR Documentation http://code.google.com/p/perfsonar-ps/wiki/pSPerformanceToolkit31

Perfsonar Main site http://www.perfsonar.net/

Internet2 perfSONAR Global Service and Data View
BWCTL: http://dc211.internet2.edu/cgi-bin/perfSONAR/serviceList.cgi#BWCTL

ESnet perfSONAR Global Service and Data View
BWCTL: http://stats1.es.net/cgi-bin/directory.cgi

Tasks


Description of tasks and next steps as they evolve in this process, trying to keep thoughts documented in a cohesive manner.
Tasks and Next Steps

Documented Tests and Events


2011-01-04 Emulab tests to CHPC and other sites

During maintenance on Dec. , the Emulab group found that the uplink for the d710 nodes was plugged into a 100Mb/s switch port. They moved it to the correct Gig port. I am now re-running tests to see what performance I can obtain.
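
A quick host-side check of the negotiated link speed looks like the following (a sketch; the interface name eth0 is an assumption):

ethtool eth0 | grep -i speed    # should now report 1000Mb/s rather than 100Mb/s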

2011-01-04 Emulab tests to CHPC and Internet2 Houston PoP

2010-12-13 Firewall Network Performance meeting and test results

2010-12-13 Firewall Network Performance meeting and test results (Altiris Helpdesk ticket 262433)

2010-12-07 Snapshots of IPv6 and IPv4 graphs through the Univ of Utah firewall

We have found that IPv6 and IPv4 graphs are different through the campus WAN firewall. The following link shows snapshots of the graphs as of 12-07-2010.

2010-12-07 Snapshots of IPv6 and IPv4 graphs through the Univ of Utah firewall

2010-11-29 Firewall Network Performance meeting and test results

Ken Kizer, Tim Urban, John Koolhoven, and Joe Breen met to go over action items, look at the results of changes, and perform some tests. One major observation was the difference between tests to the I2 Houston PoP over IPv6 versus IPv4.
2010-11-29 Firewall Network Performance meeting and test results (Altiris Helpdesk ticket 262433)

2010-11-26 Network Performance tests between uv.sci.utah.edu and bandwidth.chpc.utah.edu

In order to gain a different perspective from around campus, the staff at SCI enabled access to run performance tests on a 10Gig enabled box.
2010-11-26 Network Performance tests between uv.sci.utah.edu and bandwidth.chpc.utah.edu

2010-11-19 Campus MTU troubleshooting and Network Performance tests from Emulab/FLUX

2010-11-19 Campus MTU troubleshooting and Network Performance tests from Emulab

2010-11-18 Univ of Utah local tests

2010-11-18 Univ of Utah local backbone tests

2010-11-17

2010-11-16 Setup of cross-correlation sites and virtual measurement machine with new rev of perfSONAR

Contacted Rob Ricci of the Emulab/Flux group. He helped me set up an Emulab account so that I could run some tests from machines that bypass the campus firewall. After wading through documentation and trading emails with Rob, I was able to use the following steps to set up a test node:

  • Go to http://www.emulab.net -> login ->
  • Go to Experimentation (drop-down at top of page after login) -> Begin Experiment -> New GUI editor
  • Drag a node onto page
  • Click on ... by OS in Node Properties->Software and choose CENTOS55-STD
  • Click on Node Properties Hardware Text box and type d710
  • Choose File -> Create New Experiment
  • Type initialtest (or some other non-space separated name) for name
  • Type a description
  • Click on "swap in immediately" check box
  • Click on create button
  • Look at http://www.emulab.net -> View Activity Log file to determine host in use
  • ssh to <hostname>.emulab.net
  • wget http://software.internet2.edu/sources/bwctl/bwctl-1.3.tar.gz
  • configure and build with an appropriate --prefix= flag (see the build sketch just below this list).
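
The build itself is the usual autotools sequence. A minimal sketch, assuming the tarball unpacks to bwctl-1.3 and using an example --prefix:

tar zxf bwctl-1.3.tar.gz
cd bwctl-1.3
./configure --prefix=$HOME/bwctl
make && make install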

I also attempted to contact Nick Rathke to set up an account at SCI. They have 10Gig infrastructure, and I want to cross-correlate data points.

Setting up a CentOS 5.5 virtual machine so that I can install the latest perfSONAR software. Upgrades took quite a while.

Also looking at the latest perfSONAR Performance Toolkit 3.2. I tried to get it to run in a virtual machine but was unable to.

2010-11-15 Firewall Network Performance meeting (Altiris Helpdesk ticket 262433)

Met with Ken Kizer, Tim Urban and John Koolhoven, all of UIT, to discuss next steps for troubleshooting firewall network performance.

2:00pm-3:30pm: Firewall network performance troubleshooting

Attendees
Ken Kizer, John Koolhoven, Tim Urban, Joe Breen

Notes

Network connections - 

Use Algosec to troubleshoot performance of ACLs.
fwsm command: sho mp 3 acl stats

currently 10k ACLs, Jonzy wants 4800 more ACLs for outbound, 

KK: Find what rule and 

JK: is latency really the issue
KK: other interfaces having errors?

By Nov 29th
<<KK>> KK optimize firewall - find a firewall rule that Joe's traffic is hitting
<<JK>> Check all the router interfaces 
<<KK>> Sniff traffic from WAN-NAM Cisco appliance
<<JB>> Setup tests on firewall bypass from Rob Ricci/Flux 
<<JB>> Need to open another ticket
Later...
<<>> run bwctl , Spirent context through another FWSM?
<<>> Put ruleset snapshot on lab and do baseline
<<>> TCP accelerated path on firewall

Tim has Baseline numbers on FWSM and saw several gig of traffic;  

KK: put CHPC traffic on FWSM-2?  Tim: probably not feasible

JK: Have WAN-NAM Cisco appliance

JK: might be some issues with out of sequence transmissions/dropped packets; TU: firewall bypass?  JB: need to test;  



2010-10-29 UEN removal of router in I2 path

UEN removed a router that formerly terminated the Internet2 connection, effectively dropping one hop out of the path.

2010-10-15 Campus increased WAN Border firewall MTU and upgraded the FWSM code (Altiris helpdesk ticket 122413)

Campus increased the Univ. of Utah WAN border firewall #1 MTU from 1500 to 8500 Bytes; they performed this change on both firewalls. They also upgraded the FWSM code at this time. The firewall now supports IPv6 rules, where previously it only supported IPv6 via "transparent" mode, and they have opened the IPv6 rule-set to allow any/any for the networks on which CHPC operates. The MTU change was part of Campus Altiris Helpdesk ticket 122413.
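
A quick spot check of the larger MTU through the firewall is a do-not-fragment ping sized just under the new limit, assuming jumbo frames are enabled along the rest of the path (the target host below is only a placeholder):

ping -M do -s 8400 perfsonar-uen.chpc.utah.edu    # 8400 + 28 bytes of headers stays under the 8500-byte firewall MTU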

2010-08-23 FDT tests from TACC to EBC

Trying FDT uploads to TACC. The other tests from the weekend were all downloads. I had to move the data to the two respective boxes first because I had deleted it.
2010-08-23 FDT sample tests from EBC to TACC (Joe Breen)

2010-08-22 FDT tests from TACC to EBC

The following tests utilized the CERN project FDT to examine actual file transfers versus bwctl/iperf alone (a usage sketch follows the links below).
2010-08-22 FDT tests from TACC to EBC (Joe Breen)
2010-08-22 FDT tests from TACC to INSCC (Joe Breen)
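
For reference, FDT transfers of this kind are driven from the command line roughly as follows. This is only a sketch; the hostname and paths are placeholders, not the actual test endpoints, and fdt.jar is assumed to be in the current directory:

# destination side: start an FDT server
java -jar fdt.jar
# source side: push a file to the destination host
java -jar fdt.jar -c destination.example.org -d /data/incoming /data/testfile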

2010-08-15 bwctl tests from TACC to EBC and CHPC

2010-08-15 bwctl tests between TACC and EBC (Joe Breen)
2010-08-15 bwctl tests between TACC and CHPC (Joe Breen)

2010-08-06 bwctl tests between EBC perfSONAR and Level 3 PoP perfSONAR

2010-08-06 bwctl tests between EBC perfSONAR and Level 3 PoP perfSONAR (Joe Breen)

2010-08-05 UEN Tests of Fiber between Level 3 PoP and EBC

Layer 1 and 2 tests showed no drops between the Level 3 PoP and EBC. The tests UEN ran were over the fiber pair that connects the two sites.

2010-08-02 Troubleshooting issues with tests to 205.122.102.10

Troubleshooting the one-way test issues to 205.122.102.10. From 64.57.28.186, I noticed that I was not able to ping 205.122.102.10 past 8968 Bytes with the do-not-fragment flag set. The MTU was 9k on both end hosts. Strangely, I would receive no feedback at all.

[chpcadmin@Knoppix ~]$ ping -M do -s 8969 205.122.102.10
PING 205.122.102.10 (205.122.102.10) 8969(8997) bytes of data.
^C
--- 205.122.102.10 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

However, I was able to ping bwctl.salt.net.internet2.edu at 8972 Bytes with no issue. If I went to 8973 Bytes, I would receive the error:

From 64.57.28.186 icmp_seq=1 Frag needed and DF set (mtu = 9000)
From 64.57.28.186 icmp_seq=1 Frag needed and DF set (mtu = 9000)
From 64.57.28.186 icmp_seq=1 Frag needed and DF set (mtu = 9000)
From 64.57.28.186 icmp_seq=1 Frag needed and DF set (mtu = 9000)
From 64.57.28.186 icmp_seq=1 Frag needed and DF set (mtu = 9000)
From 64.57.28.186 icmp_seq=1 Frag needed and DF set (mtu = 9000)
From 64.57.28.186 icmp_seq=1 Frag needed and DF set (mtu = 9000)

The issue is that I receive no results from bwctl using iperf under the hood.

bwctl: exec_line: /usr/local/bin/iperf -B 205.122.102.10 -s -f m -m -p 5008 -t 10
bwctl: start_tool: 3489767152.094435
------------------------------------------------------------
Server listening on TCP port 5008
Binding to local address 205.122.102.10
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[ 14] local 205.122.102.10 port 5008 connected with 64.57.17.214 port 5008
bwctl: stop_exec: 3489767167.318130

RECEIVER END

When I ping from 205.122.102.10, I receive the error:

From 205.122.102.10 icmp_seq=2 Frag needed and DF set (mtu = 8996)
From 205.122.102.10 icmp_seq=2 Frag needed and DF set (mtu = 8996)
From 205.122.102.10 icmp_seq=2 Frag needed and DF set (mtu = 8996)
From 205.122.102.10 icmp_seq=2 Frag needed and DF set (mtu = 8996)

Where are the other 4 bytes? MPLS tunnel overhead screwing up things?

I called the UEN NOC. Shawn Lyons helped to reconfigure the UEN router interface from 9000 Bytes to 9216 Bytes. This change made no difference.

I verified that a single MPLS label is a fixed 4 bytes. Coincidence? The perfSONAR box hangs directly off a port of the UEN PE switch, which also terminates MPLS tunnels across the UEN core. Shawn Lyons of UEN went through and verified all interfaces between the Level3 PoP and the PE device handing off to the UofU and to the perfSONAR measurement device. He was able to ping with 9004 Bytes from the Level 3 7600 to the UEN-UofU PE device. I talked with Kevin Quire, and he has seen oddities like this previously due to VLAN headers and MPLS. However, I should be on a straight routed interface. Kevin is checking the tunnels, etc. I can mitigate the issue by changing the MTU on the host to 8950 Bytes or something similar, but the mitigation doesn't reveal the underlying cause. We need to understand this in more depth so that we have proper roll-out settings for a larger deployment.
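
For reference, the size arithmetic behind the observations above (each ping payload gains an 8-byte ICMP header and a 20-byte IPv4 header):

8972 payload + 8 ICMP + 20 IP = 9000 bytes, which fits the 9000-byte MTU (and works to bwctl.salt)
8969 payload + 8 ICMP + 20 IP = 8997 bytes, one byte over the 8996-byte MTU reported toward 205.122.102.10
9000 - 8996 = 4 bytes, exactly the size of a single MPLS label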

In order to mitigate the test issues, I changed the MTU on the host to 8965 Bytes. I will continue to work with Kevin Quire to track down the other 4 bytes.
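
The host-side mitigation amounts to a one-liner (a sketch; the interface name eth0 is an assumption, and the setting does not persist across reboots unless added to the interface configuration):

ifconfig eth0 mtu 8965
# or, with iproute2: ip link set dev eth0 mtu 8965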

2010-08-02 Testing of new load balance algorithm

2010-08-02 Test of firewall to switch Etherchannel algorithm (Joe Breen)

2010-07-31 Change of campus algorithm for Etherchannel bond to firewall

9pm. Tim Urban changed the Etherchannel bond for the WAN switches that hold the firewalls. Tim was able to change the bonding algorithm to balance using Layer 4 information from the TCP header.
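
On Catalyst-class IOS gear this is typically a single global setting. The lines below are only a sketch of the kind of change involved, not the exact commands Tim ran:

! hash on Layer 4 source/destination ports instead of IP/MAC (platform and keyword assumed)
conf t
 port-channel load-balance src-dst-port
end
! verify the active hashing algorithm
show etherchannel load-balance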

2010-07-13 Change of MTU on switch interface to perfsonar-uen.chpc.utah.edu 205.122.102.10

Aaron Brown noticed an MTU mismatch: the host was at 9000B, but something beyond it was fragmenting packets. Joe B. called the UEN NOC and had them look at the interface. The switch interface was at 1500B, and they changed the MTU. Packets were still fragmenting until Joe B. rebooted the box. The UEN NOC went through all the interfaces up to Internet2 and verified all the other interfaces.

The change in the MTU and the ensuing reboots enabled the tests to start functioning.

2010-07-08 UEN test of fiber to I2

For the past several weeks UEN had been attempting to test the fiber to I2, but unforeseen issues arose on the previous tries. They were successful on 07-08-2010 at 11pm. The physical team found that one of the fibers at the Cisco 6500 terminating NLR and I2 was very dirty and caused a 5dB loss. They were unable to test just the fiber between EBC and Level3 because they needed different optics, so they tested by putting their test equipment on either side of the routers and pushing data through.

Joe B. needs to run new tests, but the 205.122.102.10 device still has its bwctl processes hanging. Joe B. talked to Aaron Brown, Jason Z., and Jeff Boote last week during the Network Performance Workshop and set up a username/password for troubleshooting purposes. Aaron was looking at the boxes to see if anything obvious was amiss and whether anything might affect future installations.

2010-06-10 UEN EBC perfSONAR node crash and UEN Level3 perfSONAR Graph snapshots

The UEN EBC node crashed while Joe B. was attempting to get graph snapshots. The services were not running, so Joe B. attempted a reboot; the device apparently lost all of its configuration completely on reboot. Joe was trying to grab graphs before the testing that was to happen at Level3 on the fiber and the Layer 2 infrastructure.

2010-06-10 UEN Level 3 perfSONAR graph snapshots

2010-06-02 UDP transfer tests between UEN EBC and UEN Level3 Suite

2010-06-02 UDP transfer tests between UEN EBC and UEN Level3 (Joe Breen)
2010-06-02 UDP transfer tests between UEN Level3 and Internet2 Salt Lake City Level3 (Joe Breen)

2010-06-01 NDT tests between UEN EBC perfSONAR box and UEN Level 3 perfSONAR box

2010-06-01 NDT tests between UEN EBC perfSONAR box and UEN Level 3 perfSONAR box

2010-05-22 bwctl tests from TACC interactive node to Level 3 UofU-UEN perfSONAR node and UEN EBC perfSONAR node

2010-05-22 bwctl tests from TACC interactive node to UofU-UEN Level 3 perfSONAR node and UEN EBC perfSONAR node

2010-05-18 Installation of perfSONAR box at Level 3 UEN suite

Dave Richardson and Joe Clyde installed the perfSONAR box at the Level 3 UEN suite. This Dell 1435 is identical in specifications to the perfSONAR box in the Eccles Broadcast Center. The device hangs directly off the Summit router but utilizes IP space from Internet2, so it is only reachable from Internet2 space and its direct communities. The box is 64.57.28.186 and the net is 64.57.28.185/29.

2010-05-15 bwctl tests from TACC interactive node and other sites

2010-05-15 bwctl tests from TACC interactive node to full path plus Los Angeles, NOAA CO and Seattle (Joe Breen)
2010-05-15 bwctl tests from NOAA CO to and from UEN (Joe Breen)

2010-05-08 bwctl tests from TACC interactive node

2010-05-08 bwctl tests from TACC interactive node (Joe Breen)

2010-04-20 IPerf tests from TACC interactive node

2010-04-20 IPerf tests from TACC interactive node

2010-04-19 BWCTL Tests from various 1G sites

Tests aborted due to Salt Lake City node not responding.
2010-04-19 BWCTL tests from various 1G sites

2010-04-16 Graph Snapshots

Grabbed some snapshots of the monthly graphs.
2010-04-16 UEN Perfsonar Graph snapshots

2010-04-13 Internet2 to UEN border tests (Jason Zurawski)

2010-04-13 Internet2 to UEN border tests

2010-04-12 CHPC to UEN border tests

2010-04-12 CHPC to UEN border tests

2010-04-09 TACC interactive node tests

2010-04-09 TACC interactive node tests

2010-04-05 TACC interactive node tests

Set up basic traceroutes and iperf performance tests from the TACC interactive node: tg-login.ranger.tacc.teragrid.org.
2010-04-05 TACC interactive node tests

2010-03-30 Graph Snapshots

Grabbed monthly graph snapshots for ease of reference.
2010-03-30 UEN Perfsonar Graph snapshots

2010-03-15 NOAA Tests by Joe B. and 10G tests by David Richardson

Doing rigorous 3rd party tests between NOAA and the various I2 PoPs. The NOAA box is 1Gig.
2010-03-15 Tests between NOAA and I2 PoPs (Joe Breen)

Dave's tests are utilizing the new 10G NIC and getting some rigorous numbers from the different I2 sites.
2010-03-15 10G tests (David Richardson)

2010-03-12 UEN Perfsonar Graph snapshots

Dave Richardson (CHPC) and Joe Clyde (UEN) installed the perfsonar-uen 10Gig interface at 3pm yesterday, Thu 3-11-2010. We are now able to see greater-than-Gig performance, but we still do not see the expected transfer speeds.
2010-03-12 UEN Perfsonar Graph snapshots

2010-03-10 NOAA Tests by Joe B.

Doing tests to NOAA in Boulder, CO. Failed to capture the test results due to local issues with a crashed machine and crashed ssh sessions. While doing the tests, I did notice that tests to the NOAA box showed the same poor performance we see to the UEN perfSONAR box. The NOAA box is a good test point because traffic to it traverses the same physical and logical infrastructure as the UEN perfSONAR.

2010-03-08 Tests by Joe B.

2010-03-08 Test of Internet2 to UEN (Joe Breen)

2010-03-03 Tests by Joe B. after UEN reboot

No real difference seen. The reboot did not flush out anything obvious.

Houston

[chpcadmin@Knoppix ~]$ bwctl -c 205.122.102.10 -t 30 -i 2 -x -s bwctl.hous.net.internet2.edu
bwctl: Using tool: iperf
bwctl: 36 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 205.122.102.10 -s -f b -m -p 5008 -t 30 -i 2
bwctl: start_tool: 3476651719.649434
------------------------------------------------------------
Server listening on TCP port 5008
Binding to local address 205.122.102.10
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[ 14] local 205.122.102.10 port 5008 connected with 64.57.16.131 port 5008
[ 14]  0.0- 2.0 sec  17985608 Bytes  71942432 bits/sec
[ 14]  2.0- 4.0 sec  26066896 Bytes  104267584 bits/sec
[ 14]  4.0- 6.0 sec  27089184 Bytes  108356736 bits/sec
[ 14]  6.0- 8.0 sec  29795496 Bytes  119181984 bits/sec
[ 14]  8.0-10.0 sec  30494880 Bytes  121979520 bits/sec
[ 14] 10.0-12.0 sec  33231600 Bytes  132926400 bits/sec
[ 14] 12.0-14.0 sec  34209000 Bytes  136836000 bits/sec
[ 14] 14.0-16.0 sec  36685080 Bytes  146740320 bits/sec
[ 14] 16.0-18.0 sec  38053440 Bytes  152213760 bits/sec
[ 14] 18.0-20.0 sec  40008240 Bytes  160032960 bits/sec
[ 14] 20.0-22.0 sec  41767560 Bytes  167070240 bits/sec
[ 14] 22.0-24.0 sec  43396560 Bytes  173586240 bits/sec
[ 14] 24.0-26.0 sec  45677160 Bytes  182708640 bits/sec
[ 14] 26.0-28.0 sec  46784880 Bytes  187139520 bits/sec
[ 14] 28.0-30.0 sec  49482504 Bytes  197930016 bits/sec
[ 14]  0.0-30.1 sec  542277632 Bytes  144315673 bits/sec
[ 14] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
bwctl: stop_exec: 3476651754.205015

RECEIVER END

SENDER START
bwctl: exec_line: /usr/local/bin/iperf -c 205.122.102.10 -B 64.57.16.131 -f b -m -p 5008 -t 30 -i 2
bwctl: start_tool: 3476651722.121369
------------------------------------------------------------
Client connecting to 205.122.102.10, TCP port 5008
Binding to local address 64.57.16.131
TCP window size: 65536 Byte (default)
------------------------------------------------------------
[  8] local 64.57.16.131 port 5008 connected with 205.122.102.10 port 5008
[  8]  0.0- 2.0 sec  19816448 Bytes  79265792 bits/sec
[  8]  2.0- 4.0 sec  25870336 Bytes  103481344 bits/sec
[  8]  4.0- 6.0 sec  26910720 Bytes  107642880 bits/sec
[  8]  6.0- 8.0 sec  30367744 Bytes  121470976 bits/sec
[  8]  8.0-10.0 sec  30228480 Bytes  120913920 bits/sec
[  8] 10.0-12.0 sec  33431552 Bytes  133726208 bits/sec
[  8] 12.0-14.0 sec  34209792 Bytes  136839168 bits/sec
[  8] 14.0-16.0 sec  36552704 Bytes  146210816 bits/sec
[  8] 16.0-18.0 sec  37462016 Bytes  149848064 bits/sec
[  8] 18.0-20.0 sec  40861696 Bytes  163446784 bits/sec
[  8] 20.0-22.0 sec  41181184 Bytes  164724736 bits/sec
[  8] 22.0-24.0 sec  43327488 Bytes  173309952 bits/sec
[  8] 24.0-26.0 sec  46137344 Bytes  184549376 bits/sec
[  8] 26.0-28.0 sec  46587904 Bytes  186351616 bits/sec
[  8] 28.0-30.0 sec  49324032 Bytes  197296128 bits/sec
[  8]  0.0-30.0 sec  542277632 Bytes  144475833 bits/sec
[  8] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
bwctl: stop_exec: 3476651752.189405

SENDER END

Los Angeles

[chpcadmin@Knoppix ~]$ bwctl -c 205.122.102.10 -t 30 -i 2 -x -s bwctl.losa.net.internet2.edu
bwctl: Using tool: iperf
bwctl: 37 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 205.122.102.10 -s -f b -m -p 5001 -t 30 -i 2
bwctl: start_tool: 3476652244.307723
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 205.122.102.10
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[ 14] local 205.122.102.10 port 5001 connected with 64.57.17.135 port 5001
[ 14]  0.0- 2.0 sec  44651976 Bytes  178607904 bits/sec
[ 14]  2.0- 4.0 sec  56173712 Bytes  224694848 bits/sec
[ 14]  4.0- 6.0 sec  60794280 Bytes  243177120 bits/sec
[ 14]  6.0- 8.0 sec  65550960 Bytes  262203840 bits/sec
[ 14]  8.0-10.0 sec  70568280 Bytes  282273120 bits/sec
[ 14] 10.0-12.0 sec  75455280 Bytes  301821120 bits/sec
[ 14] 12.0-14.0 sec  80172864 Bytes  320691456 bits/sec
[ 14] 14.0-16.0 sec  85017872 Bytes  340071488 bits/sec
[ 14] 16.0-18.0 sec  89765864 Bytes  359063456 bits/sec
[ 14] 18.0-20.0 sec  94561640 Bytes  378246560 bits/sec
[ 14] 20.0-22.0 sec  99327008 Bytes  397308032 bits/sec
[ 14] 22.0-24.0 sec  104264688 Bytes  417058752 bits/sec
[ 14] 24.0-26.0 sec  60697264 Bytes  242789056 bits/sec
[ 14] 26.0-28.0 sec  56689200 Bytes  226756800 bits/sec
[ 14] 28.0-30.0 sec  61466152 Bytes  245864608 bits/sec
[ 14]  0.0-30.1 sec  1107165184 Bytes  294653014 bits/sec
[ 14] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
bwctl: stop_exec: 3476652279.140542

RECEIVER END

SENDER START
bwctl: exec_line: /usr/local/bin/iperf -c 205.122.102.10 -B 64.57.17.135 -f b -m -p 5001 -t 30 -i 2
bwctl: start_tool: 3476652247.087992
------------------------------------------------------------
Client connecting to 205.122.102.10, TCP port 5001
Binding to local address 64.57.17.135
TCP window size: 65536 Byte (default)
------------------------------------------------------------
[  8] local 64.57.17.135 port 5001 connected with 205.122.102.10 port 5001
[  8]  0.0- 2.0 sec  47333376 Bytes  189333504 bits/sec
[  8]  2.0- 4.0 sec  55517184 Bytes  222068736 bits/sec
[  8]  4.0- 6.0 sec  60399616 Bytes  241598464 bits/sec
[  8]  6.0- 8.0 sec  66527232 Bytes  266108928 bits/sec
[  8]  8.0-10.0 sec  70443008 Bytes  281772032 bits/sec
[  8] 10.0-12.0 sec  75390976 Bytes  301563904 bits/sec
[  8] 12.0-14.0 sec  79167488 Bytes  316669952 bits/sec
[  8] 14.0-16.0 sec  85360640 Bytes  341442560 bits/sec
[  8] 16.0-18.0 sec  90308608 Bytes  361234432 bits/sec
[  8] 18.0-20.0 sec  94093312 Bytes  376373248 bits/sec
[  8] 20.0-22.0 sec  98975744 Bytes  395902976 bits/sec
[  8] 22.0-24.0 sec  103997440 Bytes  415989760 bits/sec
[  8] 24.0-26.0 sec  61767680 Bytes  247070720 bits/sec
[  8] 26.0-28.0 sec  56303616 Bytes  225214464 bits/sec
[  8] 28.0-30.0 sec  61571072 Bytes  246284288 bits/sec
[  8]  0.0-30.0 sec  1107165184 Bytes  295099726 bits/sec
[  8] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
bwctl: stop_exec: 3476652277.132672

SENDER END

Seattle

[chpcadmin@Knoppix ~]$ bwctl -c 205.122.102.10 -t 30 -i 2 -x -s bwctl.seat.net.internet2.edu
bwctl: Using tool: iperf
bwctl: 35 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 205.122.102.10 -s -f b -m -p 5002 -t 30 -i 2
bwctl: start_tool: 3476652668.257157
------------------------------------------------------------
Server listening on TCP port 5002
Binding to local address 205.122.102.10
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[ 14] local 205.122.102.10 port 5002 connected with 64.57.19.6 port 5002
[ 14]  0.0- 2.0 sec  35087936 Bytes  140351744 bits/sec
[ 14]  2.0- 4.0 sec  47769520 Bytes  191078080 bits/sec
[ 14]  4.0- 6.0 sec  56812280 Bytes  227249120 bits/sec
[ 14]  6.0- 8.0 sec  65616120 Bytes  262464480 bits/sec
[ 14]  8.0-10.0 sec  74585032 Bytes  298340128 bits/sec
[ 14] 10.0-12.0 sec  84535688 Bytes  338142752 bits/sec
[ 14] 12.0-14.0 sec  93707320 Bytes  374829280 bits/sec
[ 14] 14.0-16.0 sec  102870264 Bytes  411481056 bits/sec
[ 14] 16.0-18.0 sec  69210056 Bytes  276840224 bits/sec
[ 14] 18.0-20.0 sec  64117440 Bytes  256469760 bits/sec
[ 14] 20.0-22.0 sec  73058840 Bytes  292235360 bits/sec
[ 14] 22.0-24.0 sec  81891640 Bytes  327566560 bits/sec
[ 14] 24.0-26.0 sec  66072240 Bytes  264288960 bits/sec
[ 14] 26.0-28.0 sec  53282056 Bytes  213128224 bits/sec
[ 14] 28.0-30.0 sec  62637584 Bytes  250550336 bits/sec
[ 14]  0.0-30.0 sec  1032568832 Bytes  275009174 bits/sec
[ 14] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
bwctl: stop_exec: 3476652701.170652

RECEIVER END

SENDER START
bwctl: exec_line: /usr/local/bin/iperf -c 205.122.102.10 -B 64.57.19.6 -f b -m -p 5002 -t 30 -i 2
bwctl: start_tool: 3476652670.141888
------------------------------------------------------------
Client connecting to 205.122.102.10, TCP port 5002
Binding to local address 64.57.19.6
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[  8] local 64.57.19.6 port 5002 connected with 205.122.102.10 port 5002
[  8]  0.0- 2.0 sec  36200448 Bytes  144801792 bits/sec
[  8]  2.0- 4.0 sec  48021504 Bytes  192086016 bits/sec
[  8]  4.0- 6.0 sec  56623104 Bytes  226492416 bits/sec
[  8]  6.0- 8.0 sec  65224704 Bytes  260898816 bits/sec
[  8]  8.0-10.0 sec  74547200 Bytes  298188800 bits/sec
[  8] 10.0-12.0 sec  84770816 Bytes  339083264 bits/sec
[  8] 12.0-14.0 sec  93765632 Bytes  375062528 bits/sec
[  8] 14.0-16.0 sec  102760448 Bytes  411041792 bits/sec
[  8] 16.0-18.0 sec  69525504 Bytes  278102016 bits/sec
[  8] 18.0-20.0 sec  64634880 Bytes  258539520 bits/sec
[  8] 20.0-22.0 sec  72982528 Bytes  291930112 bits/sec
[  8] 22.0-24.0 sec  81190912 Bytes  324763648 bits/sec
[  8] 24.0-26.0 sec  66461696 Bytes  265846784 bits/sec
[  8] 26.0-28.0 sec  53821440 Bytes  215285760 bits/sec
[  8] 28.0-30.0 sec  62029824 Bytes  248119296 bits/sec
[  8]  0.0-30.0 sec  1032568832 Bytes  275302272 bits/sec
[  8] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
bwctl: stop_exec: 3476652700.165834

SENDER END

Kansas

[chpcadmin@Knoppix ~]$ bwctl -c 205.122.102.10 -t 30 -i 2 -x -s bwctl.kans.net.internet2.edu
bwctl: Using tool: iperf
bwctl: 35 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 205.122.102.10 -s -f b -m -p 5003 -t 30 -i 2
bwctl: start_tool: 3476652860.481759
------------------------------------------------------------
Server listening on TCP port 5003
Binding to local address 205.122.102.10
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[ 14] local 205.122.102.10 port 5003 connected with 64.57.16.214 port 5003
[ 14]  0.0- 2.0 sec  190717528 Bytes  762870112 bits/sec
[ 14]  2.0- 4.0 sec  234751208 Bytes  939004832 bits/sec
[ 14]  4.0- 6.0 sec  234749760 Bytes  938999040 bits/sec
[ 14]  6.0- 8.0 sec  234752656 Bytes  939010624 bits/sec
[ 14]  8.0-10.0 sec  234748312 Bytes  938993248 bits/sec
[ 14] 10.0-12.0 sec  234749760 Bytes  938999040 bits/sec
[ 14] 12.0-14.0 sec  234751208 Bytes  939004832 bits/sec
[ 14] 14.0-16.0 sec  234752656 Bytes  939010624 bits/sec
[ 14] 16.0-18.0 sec  234748312 Bytes  938993248 bits/sec
[ 14] 18.0-20.0 sec  234748312 Bytes  938993248 bits/sec
[ 14] 20.0-22.0 sec  234757000 Bytes  939028000 bits/sec
[ 14] 22.0-24.0 sec  234745416 Bytes  938981664 bits/sec
[ 14] 24.0-26.0 sec  234748312 Bytes  938993248 bits/sec
[ 14] 26.0-28.0 sec  234751208 Bytes  939004832 bits/sec
[ 14] 28.0-30.0 sec  234752656 Bytes  939010624 bits/sec
[ 14]  0.0-30.2 sec  3503816704 Bytes  927348005 bits/sec
[ 14] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
bwctl: stop_exec: 3476652893.495665

RECEIVER END

SENDER START
bwctl: exec_line: /usr/local/bin/iperf -c 205.122.102.10 -B 64.57.16.214 -f b -m -p 5003 -t 30 -i 2
bwctl: start_tool: 3476652862.312795
------------------------------------------------------------
Client connecting to 205.122.102.10, TCP port 5003
Binding to local address 64.57.16.214
TCP window size: 65535 Byte (default)
------------------------------------------------------------
[  8] local 64.57.16.214 port 5003 connected with 205.122.102.10 port 5003
[  8]  0.0- 2.0 sec  212901888 Bytes  851607552 bits/sec
[  8]  2.0- 4.0 sec  241090560 Bytes  964362240 bits/sec
[  8]  4.0- 6.0 sec  229040128 Bytes  916160512 bits/sec
[  8]  6.0- 8.0 sec  241090560 Bytes  964362240 bits/sec
[  8]  8.0-10.0 sec  229031936 Bytes  916127744 bits/sec
[  8] 10.0-12.0 sec  241033216 Bytes  964132864 bits/sec
[  8] 12.0-14.0 sec  229097472 Bytes  916389888 bits/sec
[  8] 14.0-16.0 sec  234340352 Bytes  937361408 bits/sec
[  8] 16.0-18.0 sec  235790336 Bytes  943161344 bits/sec
[  8] 18.0-20.0 sec  229040128 Bytes  916160512 bits/sec
[  8] 20.0-22.0 sec  241090560 Bytes  964362240 bits/sec
[  8] 22.0-24.0 sec  229040128 Bytes  916160512 bits/sec
[  8] 24.0-26.0 sec  241090560 Bytes  964362240 bits/sec
[  8] 26.0-28.0 sec  229031936 Bytes  916127744 bits/sec
[  8] 28.0-30.0 sec  241098752 Bytes  964395008 bits/sec
[  8]  0.0-30.1 sec  3503816704 Bytes  932297859 bits/sec
[  8] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
bwctl: stop_exec: 3476652892.405895

SENDER END

2010-03-02 Tests by Jason Z.

2010-03-01 Snapshots of Monthly bandwidth graphs from UEN perfsonar to I2 PoPs


2010-02-22 Real-time Troubleshooting at UEN


Kevin Q. and Joe B. did real-time troubleshooting on the 205.122.102.10 perfsonar box. Kevin monitored queues and looked for discards.

Called Kevin Q. and we looked at the graphs for bwctl.chpc and 205.122.102.10 (uen perfsonar).

small buffers on 6500 10G card
looking at drops on 1g interface to uen ps box - some bursts 76 vs
wash
chpcadmin@Knoppix ~$ bwctl -t 30 -s bwctl.wash.net.internet2.edu -c bwctl.hous.net.internet2.edu
bwctl: Using tool: iperf
bwctl: 36 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 2001:468:3:11::16:131 -s -f b -m -p 5003 -t 30 -V
bwctl: start_tool: 3475861985.534150
------------------------------------------------------------
Server listening on TCP port 5003
Binding to local address 2001:468:3:11::16:131
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[ 14] local 2001:468:3:11::16:131 port 5003 connected with 2001:468:9:100::16:22 port 5003
[ 14] 0.0-30.6 sec 14113513472 Bytes 3684914455 bits/sec
[ 14] MSS size 8928 bytes (MTU 8968 bytes, unknown interface)
bwctl: stop_exec: 3475862019.640627

2:15pm
Kevin to change Gig connection to PE with DFC card. Queue size same, same burst output drops on order of 57-69 drops in 30s test; Kevin cranked output queue to 4096 packets/queue
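
The sort of checks used while watching these queues look like the following (a sketch; the interface name is an assumption):

show interfaces GigabitEthernet2/1 | include output drops
show queueing interface GigabitEthernet2/1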

40 packets/queue
chpcadmin@Knoppix ~$ bwctl -t 30 -s bwctl.hous.net.internet2.edu -c 205.122.102.10
bwctl: Using tool: iperf
bwctl: 37 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 205.122.102.10 -s -f b -m -p 5004 -t 30
bwctl: start_tool: 3475862808.379326
------------------------------------------------------------
Server listening on TCP port 5004
Binding to local address 205.122.102.10
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[ 15] local 205.122.102.10 port 5004 connected with 64.57.16.131 port 5004
[ 15] 0.0-30.1 sec 418004992 Bytes 111061273 bits/sec
[ 15] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
bwctl: stop_exec: 3475862843.034768

RECEIVER END

4096 packets/queue 57 packets dropped

chpcadmin@Knoppix ~$ bwctl -t 30 -s bwctl.hous.net.internet2.edu -c 205.122.102.10
bwctl: Using tool: iperf
bwctl: 37 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 205.122.102.10 -s -f b -m -p 5005 -t 30
bwctl: start_tool: 3475862887.961470
------------------------------------------------------------
Server listening on TCP port 5005
Binding to local address 205.122.102.10
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[ 15] local 205.122.102.10 port 5005 connected with 64.57.16.131 port 5005
[ 15] 0.0-30.1 sec 527040512 Bytes 140236467 bits/sec
[ 15] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
bwctl: stop_exec: 3475862922.713800

4096 packets/queue, 4 packets dropped

chpcadmin@Knoppix ~$ bwctl -t 30 -s bwctl.hous.net.internet2.edu -c 205.122.102.10
bwctl: Using tool: iperf
bwctl: 37 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 205.122.102.10 -s -f b -m -p 5008 -t 30
bwctl: start_tool: 3475863103.883208
------------------------------------------------------------
Server listening on TCP port 5008
Binding to local address 205.122.102.10
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[ 15] local 205.122.102.10 port 5008 connected with 64.57.16.131 port 5008
[ 15] 0.0-30.1 sec 394772480 Bytes 105037982 bits/sec
[ 15] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
bwctl: stop_exec: 3475863138.851111

RECEIVER END

0 packets dropped on tests from SLC and Kansas

From ANL (10G)

chpcadmin@Knoppix ~$ bwctl -t 30 -s anlborder-ps.it.anl.gov -c bwctl.salt.net.internet2.edu
bwctl: Using tool: iperf
bwctl: 36 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 64.57.17.214 -s -f b -m -p 5008 -t 30
bwctl: start_tool: 3475863695.649587
------------------------------------------------------------
Server listening on TCP port 5008
Binding to local address 64.57.17.214
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[ 14] local 64.57.17.214 port 5008 connected with 130.202.222.58 port 5008
[ 14] 0.0-30.1 sec 3595010048 Bytes 956742021 bits/sec
[ 14] MSS size 8948 bytes (MTU 8988 bytes, unknown interface)
bwctl: stop_exec: 3475863730.042198

RECEIVER END
chpcadmin@Knoppix ~$ bwctl -t 30 -s anlborder-ps.it.anl.gov -c bwctl.chic.net.internet2.edu
bwctl: Using tool: iperf
bwctl: 36 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 64.57.17.2 -s -f b -m -p 5009 -t 30
bwctl: start_tool: 3475863796.855584
------------------------------------------------------------
Server listening on TCP port 5009
Binding to local address 64.57.17.2
TCP window size: 16777216 Byte (default)
------------------------------------------------------------
[ 14] local 64.57.17.2 port 5009 connected with 130.202.222.58 port 5009
[ 14] 0.0-30.0 sec 33127989248 Bytes 8833600745 bits/sec
[ 14] MSS size 8948 bytes (MTU 8988 bytes, unknown interface)
bwctl: stop_exec: 3475863831.044146

RECEIVER END
chpcadmin@Knoppix ~$ bwctl -t 30 -s anlborder-ps.it.anl.gov -c 205.122.102.10
bwctl: Using tool: iperf
bwctl: 38 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 205.122.102.10 -s -f b -m -p 5001 -t 30
bwctl: start_tool: 3475863884.758674
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 205.122.102.10
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[ 15] local 205.122.102.10 port 5001 connected with 130.202.222.58 port 5001
[ 15] 0.0-30.1 sec 485261312 Bytes 129142691 bits/sec
[ 15] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
bwctl: stop_exec: 3475863921.046418

RECEIVER END
chpcadmin@Knoppix ~$ bwctl -t 30 -c anlborder-ps.it.anl.gov -s 205.122.102.10
bwctl: Using tool: iperf
bwctl: 38 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 130.202.222.58 -s -f b -m -p 5001 -t 30
bwctl: start_tool: 3475863950.269937
bind failed: Address already in use
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 130.202.222.58
TCP window size: 87380 Byte (default)
------------------------------------------------------------
bwctl: remote peer cancelled test
bwctl: stop_exec: 3475863959.560332

RECEIVER END
chpcadmin@Knoppix ~$ bwctl -t 30 -c anlborder-ps.it.anl.gov -s 205.122.102.10
bwctl: Using tool: iperf
bwctl: 36 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 130.202.222.58 -s -f b -m -p 5002 -t 30
bwctl: start_tool: 3475863981.888783
------------------------------------------------------------
Server listening on TCP port 5002
Binding to local address 130.202.222.58
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[ 15] local 130.202.222.58 port 5002 connected with 205.122.102.10 port 5002
[ 15] 0.0-30.1 sec 3464757248 Bytes 921825239 bits/sec
[ 15] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
bwctl: stop_exec: 3475864016.203221

RECEIVER END

Running with 5 multiple threads

chpcadmin@Knoppix ~$ bwctl -t 30 -P 5 -s anlborder-ps.it.anl.gov -c 205.122.102.10
bwctl: Using tool: iperf
bwctl: 37 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 205.122.102.10 -s -f b -P 5 -m -p 5004 -t 30
bwctl: start_tool: 3475864303.271471
------------------------------------------------------------
Server listening on TCP port 5004
Binding to local address 205.122.102.10
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[ 15] local 205.122.102.10 port 5004 connected with 130.202.222.58 port 5004
[ 16] local 205.122.102.10 port 5004 connected with 130.202.222.58 port 59133
[ 17] local 205.122.102.10 port 5004 connected with 130.202.222.58 port 59134
[ 18] local 205.122.102.10 port 5004 connected with 130.202.222.58 port 59135
[ 19] local 205.122.102.10 port 5004 connected with 130.202.222.58 port 59136
[ 17] 0.0-30.1 sec 140525568 Bytes 37351010 bits/sec
[ 17] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
[ 16] 0.0-30.1 sec 176873472 Bytes 47010887 bits/sec
[ 16] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
[ 15] 0.0-30.1 sec 370941952 Bytes 98588365 bits/sec
[ 15] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
[ 18] 0.0-30.1 sec 245506048 Bytes 65229258 bits/sec
[ 18] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
[ 19] 0.0-30.3 sec 165093376 Bytes 43587426 bits/sec
[ 19] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
SUM 0.0-30.3 sec 1098940416 Bytes 290138742 bits/sec
bwctl: stop_exec: 3475864336.208639

RECEIVER END

Running with 10 threads

chpcadmin@Knoppix ~$ bwctl -t 30 -P 10 -s anlborder-ps.it.anl.gov -c 205.122.102.10
bwctl: Using tool: iperf
bwctl: 37 seconds until test results available

RECEIVER START
bwctl: exec_line: /usr/local/bin/iperf -B 205.122.102.10 -s -f b -P 10 -m -p 5005 -t 30
bwctl: start_tool: 3475864371.506669
------------------------------------------------------------
Server listening on TCP port 5005
Binding to local address 205.122.102.10
TCP window size: 87380 Byte (default)
------------------------------------------------------------
[ 15] local 205.122.102.10 port 5005 connected with 130.202.222.58 port 5005
[ 16] local 205.122.102.10 port 5005 connected with 130.202.222.58 port 33015
[ 17] local 205.122.102.10 port 5005 connected with 130.202.222.58 port 33016
[ 18] local 205.122.102.10 port 5005 connected with 130.202.222.58 port 33017
[ 19] local 205.122.102.10 port 5005 connected with 130.202.222.58 port 33018
[ 20] local 205.122.102.10 port 5005 connected with 130.202.222.58 port 33020
[ 21] local 205.122.102.10 port 5005 connected with 130.202.222.58 port 33019
[ 22] local 205.122.102.10 port 5005 connected with 130.202.222.58 port 33023
[ 23] local 205.122.102.10 port 5005 connected with 130.202.222.58 port 33022
[ 24] local 205.122.102.10 port 5005 connected with 130.202.222.58 port 33021
[ 16] 0.0-30.0 sec 171950080 Bytes 45788152 bits/sec
[ 16] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
[ 23] 0.0-30.0 sec 126681088 Bytes 33731687 bits/sec
[ 23] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
[ 19] 0.0-30.1 sec 190308352 Bytes 50627136 bits/sec
[ 19] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
[ 17] 0.0-30.1 sec 191176704 Bytes 50855795 bits/sec
[ 17] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
[ 18] 0.0-30.1 sec 124379136 Bytes 33078230 bits/sec
[ 18] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
[ 22] 0.0-30.1 sec 126287872 Bytes 33581100 bits/sec
[ 22] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
[ 21] 0.0-30.1 sec 147603456 Bytes 39229134 bits/sec
[ 21] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
[ 24] 0.0-30.1 sec 127188992 Bytes 33795052 bits/sec
[ 24] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
[ 20] 0.0-30.1 sec 162480128 Bytes 43147244 bits/sec
[ 20] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
[ 15] 0.0-30.2 sec 162398208 Bytes 43074565 bits/sec
[ 15] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
SUM 0.0-30.2 sec 1530454016 Bytes 405938233 bits/sec
bwctl: stop_exec: 3475864404.373689

RECEIVER END

We can push 2-3Gb/s in aggregate but not in a single flow.

Kevin: Do we have internal 10G to 1G issues?

Sites that are successful are Kansas and Salt Lake - both devices are Gig attached. KQ: Is there something different regarding the setup of their perfsonar boxes and how they connect to the router?

<<KQ>> Open an I2 NOC ticket

<<JB>> engage Jeff Boote and Matt Z.

2010-02-18 Installation of Perfsonar box at UEN


2010-02-16 Network Performance Troubleshooting Meeting


2010-02-09 Dave R. and Joe B. findings


I2 Salt Lake City node tests to other I2 nodes

The following are the bwctl results (in Mb/s unless otherwise noted) from the I2 Salt Lake City node to the other I2 nodes.

BWCTL I2 Salt Lake City to/from I2 Los Angeles, IPv4
SLC -> LA: ./bwctl -s bwctl.salt.net.internet2.edu -c bwctl.losa.net.internet2.edu
LA -> SLC: ./bwctl -s bwctl.losa.net.internet2.edu -c bwctl.salt.net.internet2.edu
test 1
SLC -> LA: 961.7
LA -> SLC: 961.7
test 2
SLC -> LA: 962.5
LA -> SLC: 947.7
test 3
SLC -> LA: 959.8
LA -> SLC: 961.3

BWCTL I2 Salt Lake City to/from I2 Seattle, IPv4
SLC -> Seattle: ./bwctl -s bwctl.salt.net.internet2.edu -c bwctl.seat.net.internet2.edu
Seattle -> SLC: ./bwctl -s bwctl.seat.net.internet2.edu -c bwctl.salt.net.internet2.edu
test 1
SLC -> Seattle: 970.5
Seattle -> SLC: 970.3
test 2
SLC -> Seattle: 971.6
Seattle -> SLC: 970.0
test 3
SLC -> Seattle: 970.0
Seattle -> SLC: 969.1

BWCTL I2 Salt Lake City to/from I2 Kansas City, IPv4
SLC -> Kansas: ./bwctl -s bwctl.salt.net.internet2.edu -c bwctl.kans.net.internet2.edu
Kansas -> SLC: ./bwctl -s bwctl.kans.net.internet2.edu -c bwctl.salt.net.internet2.edu
test 1
SLC -> Kansas: 954.2
Kansas -> SLC: 959.0
test 2
SLC -> Kansas: 954.3
Kansas -> SLC: 958.6
test 3
SLC -> Kansas: 954.8
Kansas -> SLC: 958.8

BWCTL I2 Salt Lake City to/from I2 Houston, IPv4
SLC -> Houston: ./bwctl -s bwctl.salt.net.internet2.edu -c bwctl.hous.net.internet2.edu
Houston -> SLC: ./bwctl -c bwctl.salt.net.internet2.edu -s bwctl.hous.net.internet2.edu
test 1
SLC -> Houston: 941.5 mbit/sec
Houston -> SLC: 929.5 mbit/sec
test 2
SLC -> Houston: 935.3 mbit/sec
Houston -> SLC: 933.3 mbit/sec
test 3
SLC -> Houston: 935.6 mbit/sec
Houston -> SLC: 927.8 mbit/sec

BWCTL I2 Salt Lake City to/from I2 Chicago, IPv4
SLC -> Chicago: ./bwctl -s bwctl.salt.net.internet2.edu -c bwctl.chic.net.internet2.edu
Chicago -> SLC: ./bwctl -s bwctl.chic.net.internet2.edu -c bwctl.salt.net.internet2.edu
test 1
SLC -> Chicago: 965.1
Chicago -> SLC: 928.6
test 2
SLC -> Chicago: 966.1
Chicago -> SLC: 932.9
test 3
SLC -> Chicago: 965.1
Chicago -> SLC: 933.1

BWCTL I2 Salt Lake City to/from I2 Atlanta, IPv4
SLC -> Atlanta: ./bwctl -s bwctl.salt.net.internet2.edu -c bwctl.atla.net.internet2.edu
Atlanta -> SLC: ./bwctl -s bwctl.atla.net.internet2.edu -c bwctl.salt.net.internet2.edu
test 1
SLC -> Atlanta: 912.7
Atlanta -> SLC: 888.1
test 2
SLC -> Atlanta: 913.5
Atlanta -> SLC: 885.5
test 3
SLC -> Atlanta: 911.0
Atlanta -> SLC: 886.0

BWCTL I2 Salt Lake City to/from I2 Washington DC, IPv4
SLC -> Washington: ./bwctl -s bwctl.salt.net.internet2.edu -c bwctl.wash.net.internet2.edu
Washington -> SLC: ./bwctl -s bwctl.wash.net.internet2.edu -c bwctl.salt.net.internet2.edu
test 1
SLC -> Washington: 920.0
Washington -> SLC: 907.9
test 2
SLC -> Washington: 920.3
Washington -> SLC: 907.0
test 3
SLC -> Washington: 918.0
Washington -> SLC: 910.3

BWCTL I2 Salt Lake City to/from I2 New York City, IPv4
SLC -> New York: ./bwctl -s bwctl.salt.net.internet2.edu -c bwctl.newy.net.internet2.edu
New York -> SLC: ./bwctl -s bwctl.newy.net.internet2.edu -c bwctl.salt.net.internet2.edu
test 1
SLC -> New York: 902.1
New York -> SLC: 867.4
test 2
SLC -> New York: 900.0
New York -> SLC: 863.2
test 3
SLC -> New York: 898.8
New York -> SLC: 855.4

BWCTL I2 Salt Lake City to/from ESnet Iowa, IPv4
SLC -> Iowa: ./bwctl -s bwctl.salt.net.internet2.edu -c frown.es.net
Iowa -> SLC: ./bwctl -s frown.es.net -c bwctl.salt.net.internet2.edu
test 1
SLC -> Iowa: 846.7
Iowa -> SLC: 817.0
test 2
SLC -> Iowa: 845.7
Iowa -> SLC: 737.4
test 3
SLC -> Iowa: 839.8
Iowa -> SLC: 666.4

BWCTL I2 Salt Lake City to/from UCSC
SLC -> UCSC: ./bwctl -s bwctl.salt.net.internet2.edu -c bwctl.ucsc.edu
UCSC -> SLC: ./bwctl -s bwctl.ucsc.edu -c bwctl.salt.net.internet2.edu
test 1
SLC -> UCSC: 931.6
UCSC -> SLC: 945.0
test 2
SLC -> UCSC: 920.4
UCSC -> SLC: 946.4
test 3
SLC -> UCSC: 930.1
UCSC -> SLC: 946.5

Univ of Utah CHPC perfSONAR 3.1.1 box to the various I2 nodes (results in Mb/s).

BWCTL bandwidth.chpc.utah.edu to/from I2 Los Angeles, IPv4
CHPC -> LA: ./bwctl -s bandwidth.chpc.utah.edu -c bwctl.losa.net.internet2.edu
LA -> CHPC: ./bwctl -s bwctl.losa.net.internet2.edu -c bandwidth.chpc.utah.edu
test 1
CHPC -> LA: 271.7
LA -> CHPC: 98.6
test 2
CHPC -> LA: 212.2
LA -> CHPC: 111.9
test 3
CHPC -> LA: 238.2
LA -> CHPC: 151.0

BWCTL bandwidth.chpc.utah.edu to/from I2 Seattle, IPv4
CHPC -> Seattle: ./bwctl -s bandwidth.chpc.utah.edu -c bwctl.seat.net.internet2.edu
Seattle -> CHPC: ./bwctl -s bwctl.seat.net.internet2.edu -c bandwidth.chpc.utah.edu
test 1
CHPC -> Seattle: 307.9
Seattle -> CHPC: 114.1
test 2
CHPC -> Seattle: 113.3
Seattle -> CHPC: 245.6
test 3
CHPC -> Seattle: 86.4
Seattle -> CHPC: 149.4

BWCTL bandwidth.chpc.utah.edu to/from I2 Kansas City, IPv4
CHPC -> Kansas: ./bwctl -s bandwidth.chpc.utah.edu -c bwctl.kans.net.internet2.edu
Kansas -> CHPC: ./bwctl -s bwctl.kans.net.internet2.edu -c bandwidth.chpc.utah.edu
test 1
CHPC -> Kansas: 360.9
Kansas -> CHPC: 163.6
test 2
CHPC -> Kansas: 344.2
Kansas -> CHPC: 259.3
test 3
CHPC -> Kansas: 358.2
Kansas -> CHPC: 205.8

BWCTL bandwidth.chpc.utah.edu to/from I2 Houston, IPv4
CHPC -> Houston: ./bwctl -s bandwidth.chpc.utah.edu -c bwctl.hous.net.internet2.edu
Houston -> CHPC: ./bwctl -c bandwidth.chpc.utah.edu -s bwctl.hous.net.internet2.edu
test 1
CHPC -> Houston: 163.1
Houston -> CHPC: 71.1
test 2
CHPC -> Houston: 287.8
Houston -> CHPC: 36.9
test 3
CHPC -> Houston: 153.3
Houston -> CHPC: 96.6

BWCTL bandwidth.chpc.utah.edu to/from I2 Chicago, IPv4
CHPC -> Chicago: ./bwctl -s bandwidth.chpc.utah.edu -c bwctl.chic.net.internet2.edu
Chicago -> CHPC: ./bwctl -s bwctl.chic.net.internet2.edu -c bandwidth.chpc.utah.edu
test 1
CHPC -> Chicago: 305.7
Chicago -> CHPC: 158.9
test 2
CHPC -> Chicago: 258.7
Chicago -> CHPC: 145.8
test 3
CHPC -> Chicago: 260.5
Chicago -> CHPC: 137.0

BWCTL bandwidth.chpc.utah.edu to/from I2 Atlanta, IPv4
CHPC -> Atlanta: ./bwctl -s bandwidth.chpc.utah.edu -c bwctl.atla.net.internet2.edu
Atlanta -> CHPC: ./bwctl -s bwctl.atla.net.internet2.edu -c bandwidth.chpc.utah.edu
test 1
CHPC -> Atlanta: 325.8
Atlanta -> CHPC: 190.5
test 2
CHPC -> Atlanta: 47.7
Atlanta -> CHPC: 329.0
test 3
CHPC -> Atlanta: 322.1
Atlanta -> CHPC: 187.8

BWCTL bandwidth.chpc.utah.edu to/from I2 Washington DC, IPv4
CHPC -> Washington: ./bwctl -s bandwidth.chpc.utah.edu -c bwctl.wash.net.internet2.edu
Washington -> CHPC: ./bwctl -s bwctl.wash.net.internet2.edu -c bandwidth.chpc.utah.edu
test 1
CHPC -> Washington: 253.1
Washington -> CHPC: 223.5
test 2
CHPC -> Washington: 264.9
Washington -> CHPC: 178.3
test 3
CHPC -> Washington: 302.6
Washington -> CHPC: 90.5

BWCTL bandwidth.chpc.utah.edu to/from I2 New York City, IPv4
CHPC -> New York: ./bwctl -s bandwidth.chpc.utah.edu -c bwctl.newy.net.internet2.edu
New York -> CHPC: ./bwctl -s bwctl.newy.net.internet2.edu -c bandwidth.chpc.utah.edu
test 1
CHPC -> New York: 45.6
New York -> CHPC: 210.0
test 2
CHPC -> New York: 47.2
New York -> CHPC: 265.5
test 3
CHPC -> New York: 116.5
New York -> CHPC: 309.3

BWCTL bandwidth.chpc.utah.edu to/from UCSC, IPv4
CHPC -> UCSC: ./bwctl -s bandwidth.chpc.utah.edu -c bwctl.ucsc.edu
UCSC -> CHPC: ./bwctl -s bwctl.ucsc.edu -c bandwidth.chpc.utah.edu
test 1
CHPC -> UCSC: 131.1
UCSC -> CHPC: 45.7
test 2
CHPC -> UCSC: 132.2
UCSC -> CHPC: 92.7
test 3
CHPC -> UCSC: 132.0
UCSC -> CHPC: 67.5

BWCTL bandwidth.chpc.utah.edu to/from ESnet Iowa
CHPC -> Iowa: ./bwctl -s bandwidth.chpc.utah.edu -c frown.es.net
Iowa -> CHPC: ./bwctl -s frown.es.net -c bandwidth.chpc.utah.edu
test 1
CHPC -> Iowa: 65.2
Iowa -> CHPC: 37.8
test 2
CHPC -> Iowa: 55.5
Iowa -> CHPC: 36.6
test 3
CHPC -> Iowa: 43.3
Iowa -> CHPC: 40.9

I2 Los Angeles node to other nodes around the I2 backbone (results in Mb/s).

BWCTL I2 Los Angeles to I2 New York, IPv6
LA -> NY: ./bwctl -s bwctl.losa.net.internet2.edu -c bwctl.newy.net.internet2.edu
NY -> LA: ./bwctl -s bwctl.newy.net.internet2.edu -c bwctl.losa.net.internet2.edu
test 1
LA -> NY: 4486.8
NY -> LA: 6063.3
test 2
LA -> NY: 4234.6
NY -> LA: 5408.6
test 3
LA -> NY: 3395.1
NY -> LA: 6118.5

BWCTL I2 Los Angeles to I2 New York, IPv4
LA -> NY: ./bwctl -s 64.57.17.135 -c 64.57.17.66
NY -> LA: ./bwctl -s 64.57.17.66 -c 64.57.17.135
test 1
LA -> NY: 5280.7
NY -> LA: 5276.4
test 2
LA -> NY: 6443.2
NY -> LA: 4102.2
test 3
LA -> NY: 6303.7
NY -> LA: 4668.0

I2 10-gig bwctl hosts:
losa (Los Angeles)
seat (Seattle)
newy (New York)
hous (Houston)
atla (Atlanta)

I2 1-gig bwctl hosts:
salt (Salt Lake City)
kans (Kansas City)
wash (Washington, DC)

2010-02-08 Dave R. and Joe B. findings


All tests performed on 2010-02-08, between 1500 and 1700

NDT Numenor (my desktop at CHPC) to coffee.net.utah.edu, IPv4
numenor.chpc.utah.edu to coffee.net.utah.edu
test 1
running 10s outbound test (client to server) . . . . . 868.55 Mb/s
running 10s inbound test (server to client) . . . . . . 921.76 Mb/s
test 2
running 10s outbound test (client to server) . . . . . 916.38 Mb/s
running 10s inbound test (server to client) . . . . . . 937.11 Mb/s
test 3
running 10s outbound test (client to server) . . . . . 919.42 Mb/s
running 10s inbound test (server to client) . . . . . . 937.38 Mb/s

NDT Numenor (my desktop at CHPC) to speedtest.uen.net, IPv4
numenor.chpc.utah.edu to speedtest.uen.net
test 1
running 10s outbound test (client to server) . . . . . 456.12 Mb/s
running 10s inbound test (server to client) . . . . . . 473.24 Mb/s
test 2
running 10s outbound test (client to server) . . . . . 442.43 Mb/s
running 10s inbound test (server to client) . . . . . . 464.41 Mb/s
test 3
running 10s outbound test (client to server) . . . . . 442.81 Mb/s
running 10s inbound test (server to client) . . . . . . 467.91 Mb/s

NDT Numenor (my desktop at CHPC) to nettest.boulder.noaa.gov, IPv4
numenor.chpc.utah.edu to nettest.boulder.noaa.gov
test 1
running 10s outbound test (client to server) . . . . . 70.74 Mb/s
running 10s inbound test (server to client) . . . . . . 306.20 Mb/s
test 2
running 10s outbound test (client to server) . . . . . 72.72 Mb/s
running 10s inbound test (server to client) . . . . . . 214.61 Mb/s
test 3
running 10s outbound test (client to server) . . . . . 72.99 Mb/s
running 10s inbound test (server to client) . . . . . . 290.83 Mb/s

BWCTL CHPC to/from Internet2 SLC pop, IPv4
I2 Salt -> CHPC: ./bwctl -s bwctl.salt.net.internet2.edu -c bandwidth.chpc.utah.edu
CHPC -> I2 Salt: ./bwctl -c bwctl.salt.net.internet2.edu -s bandwidth.chpc.utah.edu
test 1
I2 Salt -> CHPC 480.2 mbit/sec
CHPC -> I2 Salt 388.4 mbit/sec
test 2
I2 Salt -> CHPC 486.8 mbit/sec
CHPC -> I2 Salt 358.7 mbit/sec
test 3
I2 Salt -> CHPC 501.9 mbit/sec
CHPC -> I2 Salt 346.5 mbit/sec

BWCTL CHPC to/from NOAA in Boulder, CO, IPv4
boulder -> chpc: ./bwctl -s nettest.boulder.noaa.gov -c bandwidth.chpc.utah.edu
chpc -> boulder: ./bwctl -c nettest.boulder.noaa.gov -s bandwidth.chpc.utah.edu
test 1
boulder -> chpc 122.4 mbit/sec
chpc -> boulder 165.1 mbit/sec
test 2
boulder -> chpc 165.5 mbit/sec
chpc -> boulder 184.9 mbit/sec
test 3
boulder -> chpc 192.8 mbit/sec
chpc -> boulder 184.6 mbit/sec

BWCTL NOAA in Boulder, CO to/from Internet2 SLC pop, IPv4
I2 Salt -> Boulder: ./bwctl -s bwctl.salt.net.internet2.edu -c nettest.boulder.noaa.gov
Boulder -> I2 Salt: ./bwctl -c bwctl.salt.net.internet2.edu -s nettest.boulder.noaa.gov
test 1
boulder -> salt 924.0 mbit/sec
salt -> boulder 921.8 mbit/sec
test 2
boulder -> salt 924.3 mbit/sec
salt -> boulder 923.4 mbit/sec
test 3
boulder -> salt 923.5 mbit/sec
salt -> boulder 921.8 mbit/sec

NDT from my home on Comcast, to speedtest.uen.net IPv4
24.2.80.23 to speedtest.uen.net
test 1
running 10s outbound test (client to server) . . . . . 2.69 Mb/s
running 10s inbound test (server to client) . . . . . . 16.90 Mb/s
test 2
running 10s outbound test (client to server) . . . . . 2.77 Mb/s
running 10s inbound test (server to client) . . . . . . 16.65 Mb/s
test 3
running 10s outbound test (client to server) . . . . . 2.90 Mb/s
running 10s inbound test (server to client) . . . . . . 13.67 Mb/s

NDT from my home on Comcast, to CHPC IPv4
24.2.80.23 to bandwidth.chpc.utah.edu
test 1, ipv4
running 10s outbound test (client to server) . . . . . 2.81 Mb/s
running 10s inbound test (server to client) . . . . . . 2.28 Mb/s
test 2, ipv4
running 10s outbound test (client to server) . . . . . 2.81 Mb/s
running 10s inbound test (server to client) . . . . . . 2.88 Mb/s
test 3, ipv4
running 10s outbound test (client to server) . . . . . 2.67 Mb/s
running 10s inbound test (server to client) . . . . . . 2.64 Mb/s

NDT from my home on Comcast (over Hurricane Electric tunnel), to CHPC, IPv6
2001:470:1f04:230::2 to bandwidth.chpc.utah.edu
test 1, ipv6 (tunnel via HE)
running 10s outbound test (client to server) . . . . . 3.23 Mb/s
running 10s inbound test (server to client) . . . . . . 7.52 Mb/s
test 2, ipv6 (tunnel via HE)
running 10s outbound test (client to server) . . . . . 3.26 Mb/s
running 10s inbound test (server to client) . . . . . . 9.74 Mb/s
test 3, ipv6 (tunnel via HE)
running 10s outbound test (client to server) . . . . . 3.19 Mb/s
running 10s inbound test (server to client) . . . . . . 14.91 Mb/s

2010-02-02 Tim Urban findings


All values are NDT throughput in Mb/s; Out = client-to-server (C2S), In = server-to-client (S2C).

                    Tim's laptop,         Tim's laptop, flux       Tim's laptop,
                    two FWSM on           network (all firewalls   205.127.87.152
                    NOC network           bypassed, route from     (no firewall at UEN)
                                          r1-park)
Destination         Out       In          Out       In             Out       In
Coffee NDT          208.39    514.99      201.96    616.04         125.08    447.31
SLC NDT             236.18    466.82      516.11    481.63         —         —
Speedtest UEN       275.88    472.95      610.98    failed*        336.45    507.21
Chicago NDT         9.71      51.10       27.35     13.51          27.83     12.89
Atlanta NDT         7.26      8.76        17.81     8.85           —         —

* The S2C test to speedtest.uen.net on the flux-network run failed with a protocol error (see the transcript below), so no inbound figure was recorded.
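
The per-destination figures above come from the NDT transcripts that follow. A minimal sketch of pulling the same numbers out automatically, assuming each transcript has been saved to a text file (the ndt-*.txt names are hypothetical):

    #!/bin/sh
    # pull the 10s C2S/S2C throughput figures out of saved NDT transcripts
    # and print "file,direction,rate" rows for a quick spreadsheet import
    for f in ndt-*.txt; do                 # hypothetical transcript files
      grep 'running 10s' "$f" | while IFS= read -r line; do
        case "$line" in
          *outbound*) dir=C2S ;;           # client-to-server
          *)          dir=S2C ;;           # server-to-client
        esac
        rate=$(printf '%s\n' "$line" | grep -o '[0-9][0-9.]* *Mb/s')
        echo "$f,$dir,$rate"
      done
    done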

Test from Tim's workstation on the flux network (firewalls bypassed) to Chicago NDT

    • Starting test 1 of 1 **
      Connected to: ndt.chic.net.internet2.edu – Using IPv4 address
      Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
      checking for firewalls . . . . . . . . . . . . . . . . . . . Done
      running 10s outbound test (client-to-server C2S) . . . . . 27.38Mb/s
      running 10s inbound test (server-to-client S2C) . . . . . . 13.51Mb/s
      The slowest link in the end-to-end path is a 1.0 Gbps Gigabit Ethernet subnet

Connected to: ndt.salt.net.internet2.edu – Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
checking for firewalls . . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client-to-server C2S) . . . . . 516.11Mb/s
running 10s inbound test (server-to-client S2C) . . . . . . 481.63Mb/s

Connected to: coffee.utah.edu – Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
checking for firewalls . . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client-to-server C2S) . . . . . 201.96Mb/s
running 10s inbound test (server-to-client S2C) . . . . . . 616.04Mb/s
The slowest link in the end-to-end path is a 10 Gbps 10 Gigabit Ethernet/OC-192 subnet

    • Starting test 1 of 1 **
      Connecting to 'speedtest.uen.net' speedtest.uen.net/205.122.210.3 to run test
      Connected to: speedtest.uen.net – Using IPv4 address
      Another client is currently being served, your test will begin within 45 seconds
      Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
      checking for firewalls . . . . . . . . . . . . . . . . . . . Done
      running 10s outbound test (client-to-server C2S) . . . . . 610.98Mb/s
      running 10s inbound test (server-to-client S2C) . . . . . . Protocol error!
      S2C throughput test FAILED!
    • Starting test 1 of 1 **
      Connected to: ndt.atla.net.internet2.edu – Using IPv4 address
      Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
      checking for firewalls . . . . . . . . . . . . . . . . . . . Done
      running 10s outbound test (client-to-server C2S) . . . . . 17.81Mb/s
      running 10s inbound test (server-to-client S2C) . . . . . . 8.85Mb/s
      The slowest link in the end-to-end path is a 1.0 Gbps Gigabit Ethernet subnet
      S2C: Packet queuing detected

Test from Tim's workstation (Gig link, FWSM in Tim's workstation network) to Coffee NDT

TCP/Web100 Network Diagnostic Tool v5.4.12
click START to begin
Connected to: coffee.utah.edu – Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
checking for firewalls . . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client-to-server C2S) . . . . . 208.39Mb/s
running 10s inbound test (server-to-client S2C) . . . . . . 514.99Mb/s
The slowest link in the end-to-end path is a 10 Gbps 10 Gigabit Ethernet/OC-192 subnet

Test from Tim's workstation (Gig link, not behind local FWSM; still routes through border firewall VLAN 575) to Coffee NDT

Connected to: coffee.utah.edu – Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
checking for firewalls . . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client-to-server C2S) . . . . . 236.18Mb/s
running 10s inbound test (server-to-client S2C) . . . . . . 466.82Mb/s
The slowest link in the end-to-end path is a 10 Gbps 10 Gigabit Ethernet/OC-192 subnet

Test from Tim's workstation (Gig link, FWSM in Tim's workstation network) to Salt Lake NDT

    • Starting test 1 of 1 **
      Connected to: ndt.salt.net.internet2.edu – Using IPv4 address
      Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
      checking for firewalls . . . . . . . . . . . . . . . . . . . Done
      running 10s outbound test (client-to-server C2S) . . . . . 252.46Mb/s
      running 10s inbound test (server-to-client S2C) . . . . . . 349.63Mb/s
      The slowest link in the end-to-end path is a a 622 Mbps OC-12 subnet

Test from Tim's workstation (Gig link, FWSM in Tim's workstation network) to speedtest.uen.net NDT

Connecting to 'speedtest.uen.net' speedtest.uen.net/205.122.210.3 to run test
Connected to: speedtest.uen.net – Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
checking for firewalls . . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client-to-server C2S) . . . . . 275.88Mb/s
running 10s inbound test (server-to-client S2C) . . . . . . 472.95Mb/s
The slowest link in the end-to-end path is a a 622 Mbps OC-12 subnet

Test from Tim's workstation (Gig link, not behind local FWSM; still routes through border firewall VLAN 575) to speedtest.uen.net NDT

Connecting to 'speedtest.uen.net' speedtest.uen.net/205.122.210.3 to run test
Connected to: speedtest.uen.net – Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
checking for firewalls . . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client-to-server C2S) . . . . . 376.51Mb/s
running 10s inbound test (server-to-client S2C) . . . . . . 495.67Mb/s
The slowest link in the end-to-end path is a a 622 Mbps OC-12 subnet

Test from Tim's workstation (Gig link, FWSM in Tim's workstation network) to Chicago NDT

    • Starting test 1 of 1 **
      Connected to: ndt.chic.net.internet2.edu – Using IPv4 address
      Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
      checking for firewalls . . . . . . . . . . . . . . . . . . . Done
      running 10s outbound test (client-to-server C2S) . . . . . 9.71Mb/s
      running 10s inbound test (server-to-client S2C) . . . . . . 51.10Mb/s
      The slowest link in the end-to-end path is a 1.0 Gbps Gigabit Ethernet subnet
      S2C: Packet queuing detected

Test from Tim's workstation (Gig link, not behind local FWSM; still routes through border firewall VLAN 575) to Chicago NDT

TCP/Web100 Network Diagnostic Tool v5.5.4a
click START to begin

    • Starting test 1 of 1 **
      Connected to: ndt.chic.net.internet2.edu – Using IPv4 address
      Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
      checking for firewalls . . . . . . . . . . . . . . . . . . . Done
      running 10s outbound test (client-to-server C2S) . . . . . 14.35Mb/s
      running 10s inbound test (server-to-client S2C) . . . . . . 13.37Mb/s
      The slowest link in the end-to-end path is a a 622 Mbps OC-12 subnet
      S2C: Packet queuing detected

Test from Tim's workstation (Gig link, FWSM in Tim's workstation network) to Atlanta NDT

WEB100 Enabled Statistics:
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
checking for firewalls . . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client-to-server C2S) . . . . . 7.26Mb/s
running 10s inbound test (server-to-client S2C) . . . . . . 8.76Mb/s

------ Client System Details ------
OS data: Name = Windows 7, Architecture = x86, Version = 6.1
Java data: Vendor = Sun Microsystems Inc., Version = 1.6.0_18

------ Web100 Detailed Analysis ------
1 Gbps GigabitEthernet link found.
Link set to Full Duplex mode
No network congestion discovered.
Good network cable(s) found
Normal duplex operation found.

Web100 reports the Round trip time = 56.41 msec; the Packet size = 1380 Bytes; and
No packet loss - but packets arrived out-of-order 1.04% of the time
S2C throughput test: Packet queuing detected: 65.76%
This connection is receiver limited 37.58% of the time.
Increasing the client's receive buffer (63.0 KB) will improve performance
This connection is sender limited 58.63% of the time.
This connection is network limited 3.79% of the time.

Web100 reports TCP negotiated the optional Performance Settings to:
RFC 2018 Selective Acknowledgment: ON
RFC 896 Nagle Algorithm: ON
RFC 3168 Explicit Congestion Notification: OFF
RFC 1323 Time Stamping: OFF
RFC 1323 Window Scaling: OFF

Server 'ndt.atla.net.internet2.edu' is not behind a firewall. Connection to the ephemeral port was successful
Client is probably behind a firewall. Connection to the ephemeral port failed
Information: Network Middlebox is modifying MSS variable
Server IP addresses are preserved End-to-End
Client IP addresses are preserved End-to-End

--------------------------------------------------------------------------------------

Date: Jan 2, 2010, 11:40 am
Called Troy at UEN, on a 100 Mb link:

    • Starting test 1 of 1 **
      Connected to: ndt.salt.net.internet2.edu – Using IPv4 address
      Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
      checking for firewalls . . . . . . . . . . . . . . . . . . . Done
      running 10s outbound test (client-to-server C2S) . . . . . 27.21Mb/s
      running 10s inbound test (server-to-client S2C) . . . . . . 53.66Mb/s
      The slowest link in the end-to-end path is a a 100 Mbps OC-12 subnet

TCP/Web100 Network Diagnostic Tool v5.4.12
click START to begin
Connected to: coffee.utah.edu – Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
checking for firewalls . . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client-to-server C2S) . . . . . 93.49Mb/s
running 10s inbound test (server-to-client S2C) . . . . . . 85.09Mb/s

UEN test from 205.127.233.56 (UEN network; this path also has an FWSM)

    • Starting test 1 of 1 **
      Connected to: ndt.chic.net.internet2.edu – Using IPv4 address
      Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
      checking for firewalls . . . . . . . . . . . . . . . . . . . Done
      running 10s outbound test (client-to-server C2S) . . . . . 10.73Mb/s
      running 10s inbound test (server-to-client S2C) . . . . . . 13.09Mb/s
      The slowest link in the end-to-end path is a a 622 Mbps OC-12 subnet
      S2C: Packet queuing detected

TCP/Web100 Network Diagnostic Tool v5.5.4a
click START to begin

    • Starting test 1 of 1 **
      Connected to: ndt.wash.net.internet2.edu – Using IPv4 address
      Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
      checking for firewalls . . . . . . . . . . . . . . . . . . . Done
      running 10s outbound test (client-to-server C2S) . . . . . 6.75Mb/s
      running 10s inbound test (server-to-client S2C) . . . . . . 8.94Mb/s
      The slowest link in the end-to-end path is a 1.0 Gbps Gigabit Ethernet subnet
      S2C: Packet queuing detected

TCP/Web100 Network Diagnostic Tool v3.5.14
click START to begin

    • Starting test 1 of 1 **
      Connecting to 'speedtest.uen.net' speedtest.uen.net/205.122.210.3 to run test
      Connected to: speedtest.uen.net – Using IPv4 address
      Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
      checking for firewalls . . . . . . . . . . . . . . . . . . . Done
      running 10s outbound test (client-to-server C2S) . . . . . 336.45Mb/s
      running 10s inbound test (server-to-client S2C) . . . . . . 507.21Mb/s
      The slowest link in the end-to-end path is a a 622 Mbps OC-12 subnet

Connected to: coffee.utah.edu – Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
checking for firewalls . . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client-to-server C2S) . . . . . 112.50Mb/s
running 10s inbound test (server-to-client S2C) . . . . . . 441.24Mb/s
The slowest link in the end-to-end path is a 10 Gbps 10 Gigabit Ethernet/OC-192 subnet

click START to re-test
Connected to: coffee.utah.edu – Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
checking for firewalls . . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client-to-server C2S) . . . . . 125.08Mb/s
running 10s inbound test (server-to-client S2C) . . . . . . 447.31Mb/s
The slowest link in the end-to-end path is a 10 Gbps 10 Gigabit Ethernet/OC-192 subnet

UEN connection from the UEN PE device (same point as the UofU distribution), outside the firewall, at 205.127.87.152

Connected to: ndt.chic.net.internet2.edu – Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
checking for firewalls . . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client-to-server C2S) . . . . . 27.83Mb/s
running 10s inbound test (server-to-client S2C) . . . . . . 12.89Mb/s

Test from UEN core:

Joe B and Dave R initial findings


I would like to open a helpdesk ticket regarding network performance issues out of the University of Utah. Tim Urban, Bryan Morris, and I have discussed some of these issues informally over the past few weeks; however, I would now like to open a helpdesk ticket to start the documentation process.

What we know:

  • Inside the UofU campus, Gig speeds are attainable.
  • Immediately outside the UofU campus but within Salt Lake, speeds drop to 500-600 Mb/s from a host within the University of Utah.
  • From the Internet2 PoP in Salt Lake to the Internet2 PoP in Houston, speeds of 900 Mb/s (from a 1 Gig device) are sustainable.
  • From within the campus to the Houston Internet2 PoP, we are seeing around 45 Mb/s from a 1 Gig device within the University of Utah.
  • The UofU campus firewall should be able to sustain five 1 Gig flows simultaneously.
  • The UofU campus firewall CANNOT sustain a single flow greater than 1 Gig (see the bwctl sketch after this list).
  • Bypassing the UofU campus firewall allows us to sustain 4-6 Gb/s flows from a 10 Gig device.
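
A minimal sketch for probing the single-flow vs. multi-flow behavior directly with bwctl, assuming the -P parallel-streams option of the underlying iperf is available in this toolkit release (host names are the ones used elsewhere on this page):

    #!/bin/sh
    # compare one TCP stream against five parallel streams through the
    # campus border, CHPC -> Internet2 Salt Lake PoP
    SRC=bandwidth.chpc.utah.edu
    DST=bwctl.salt.net.internet2.edu
    echo "== single stream =="
    bwctl -s "$SRC" -c "$DST" -t 10 2>&1 | grep 'bits/sec'
    echo "== 5 parallel streams =="
    bwctl -s "$SRC" -c "$DST" -t 10 -P 5 2>&1 | grep 'bits/sec'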

Questions:

  • Why do we see only 45 Mb/s to the Internet2 PoP in Texas (limit of 1 Gig at the measurement device in Texas)?
  • Why do we drop to 500-600 Mb/s between the UofU campus and the Internet2 Salt Lake City PoP (limit of 1 Gig at the measurement device in the SLC I2 PoP)?
  • What has changed in the past 1.5 years at the campus border? We were previously able to sustain 700-800 Mb/s (congestion limited).

Dave Richardson has been working on a perfSONAR setup within CHPC. perfSONAR is an active measurement framework deployed by Internet2, GEANT, ES-Net, most of the R&E networks, and many government networks. His work has documented the speeds to the various Internet2 PoPs. We can send more detail in a follow-up note.

Tom Ammon has been working on a project with SuperComputing 2009. His work has documented the 4-6 Gb/s flows because he has a separate pipe around the campus firewall for that collaborative work. We will send more detail in a follow-up note.

Other References

http://www.cisl.ucar.edu/nets/tools/NPtoolkit/
