
10gigabit_test_v1_20121128.pdf


Yuuki Tsubouchi (yuuk1)

March 25, 2014


Transcript

Title: 10 Gigabit Ethernet Test Report
Project: APD camera
Version: 1.0
Date: 28 November 2012
Written by: P. Schmidt
Checked by: D. Dunai
Document history: Original
To be done:
Document access: Open document
Can be shared with: Adimtech webpage
Copyright © AdimTech Kft. 2012
1 Overview

The APDCAM-10G camera is a camera system designed for ultra-high-speed digital imaging applications in the visible wavelength range, where a low spatial resolution is acceptable. The sensor of the camera is a two-dimensional Avalanche PhotoDiode (APD) array. The detector can be populated with up to 128 channels and consists of one to four 4-by-8-channel single detector arrays. The output from each diode of the detector is amplified by a low-noise amplifier chain and digitised inside the camera. The resulting digital data flow is organized into UDP packets and transmitted to a PC over a digital optical fibre link using the 10 Gigabit Ethernet protocol. The UDP stream is produced by a dedicated 10G Ethernet controller card (Adimtech Communication and Control card) with a built-in FPGA.

The aim of this report is to characterise the maximum data flow that can be saved into PC memory using the 10G protocol. The UDP protocol does not provide acknowledgement feedback, so it is not reliable; it is used mainly for audio and video streams. The expected data flow from the APD camera is 8.96 Gbit/s (128 channels × 5 MS/s × 14 bit). Note that the aim of this test is not to evaluate Ethernet network facilities (routers, switches) and their effect on the observed bandwidth, but to identify the achievable data rate between directly connected computers or other 10G devices. At this stage of the development only one APDCAM-10G is planned to be connected to one DAQ PC, and no other network devices are installed in the communication line. The physical layer of the communication is practically a single duplex optical fibre.
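As a quick sanity check of the quoted figure (an illustrative calculation added by the editor, not part of the original report; the variable names are ours), the expected rate follows directly from the channel count, sampling rate and sample width:

# Sanity check of the expected APDCAM-10G data rate (illustrative sketch).
channels = 128            # maximum channel count
sample_rate = 5e6         # 5 MSamples/s per channel
bits_per_sample = 14      # ADC resolution

rate_bps = channels * sample_rate * bits_per_sample
print(f"Expected payload rate: {rate_bps / 1e9:.2f} Gbit/s")       # ~8.96 Gbit/s
print(f"                       {rate_bps / 8 / 1e6:.0f} MByte/s")  # ~1120 MByte/s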
2 Test method

As a first approach, two PCs were connected directly. The IP addresses were set manually to the same subnet. Although the camera streams data in UDP packets, the tests often used the TCP protocol, since TCP is reliable. The transfer rate measured over a TCP connection is a lower limit for the UDP packet transfer rate, because TCP acknowledgements and retransmission of lost packets cause traffic overhead compared to UDP communication.

Special notes: switch off every firewall and packet-filter application! It is important to install the 10 Gb card in a PCI-E slot with at least x8 mode.

3 Test hardware

At this stage of the development the Adimtech C&C card is still in the development phase and not available for tests. An alternative way to perform these tests is to use two PCs with 10G Ethernet cards and connect them directly with optical fibres. The details of the test hardware are listed below:

PC1
  RAM: 8 GB DDR3
  CPU: Intel Core i5-2400 3.1 GHz
  MB: ASUS P8P67 Deluxe
  OS: Fedora 17, Windows 7 Professional
  NIC: Myricom Myri-10G-PCIE-8A-R, PCI-E x8

PC2
  RAM: 8 GB DDR3
  CPU: Intel Core i5-2400 3.2 GHz
  MB: ASUS P8P67 Deluxe
  OS: Ubuntu 10.04, Windows 7 Professional
  NIC: Myricom Myri-10G-PCIE-8A-R, PCI-E x8

4 Test results

The measured data flow is strongly dependent on the hardware and software environment. Both a Linux and a Windows operating system were tested. The free traffic analyser Iperf was used on both operating systems. This program is capable of measuring maximum TCP and UDP bandwidth performance, allows tuning of various parameters and UDP characteristics, and reports bandwidth, delay jitter and datagram loss.

4.1 Linux OS

For optimal performance the network buffer parameters should be increased before using the 10G network. Add the following lines to /etc/sysctl.conf:

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 250000

and execute the command:

sysctl -p /etc/sysctl.conf
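Before starting a measurement it is worth confirming that these settings are actually in effect. A minimal check could look like the following Python sketch (an editorial addition, not part of the original report); it relies only on the fact that each sysctl key is exposed under /proc/sys with the dots replaced by slashes:

# Verify the recommended kernel network buffer settings (illustrative sketch).
recommended = {
    "net.core.rmem_max": "16777216",
    "net.core.wmem_max": "16777216",
    "net.ipv4.tcp_rmem": "4096 87380 16777216",
    "net.ipv4.tcp_wmem": "4096 65536 16777216",
    "net.core.netdev_max_backlog": "250000",
}

for key, wanted in recommended.items():
    path = "/proc/sys/" + key.replace(".", "/")   # e.g. /proc/sys/net/core/rmem_max
    with open(path) as f:
        current = " ".join(f.read().split())      # normalise tabs/newlines
    status = "OK" if current == wanted else "MISMATCH (current: " + current + ")"
    print(key + ": " + status)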
The first test runs the server and the client on the same PC to test the PCI-E bandwidth.

PC1:

iperf -c 127.0.0.1 -P 1 -i 1 -p 5001 -f M -t 10
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 0.16 MByte (default)
------------------------------------------------------------
[ 3] local 127.0.0.1 port 50903 connected with 127.0.0.1 port 5001
[ ID] Interval       Transfer      Bandwidth
[ 3]  0.0- 1.0 sec   5086 MBytes   5086 MBytes/sec
[ 3]  1.0- 2.0 sec   5145 MBytes   5145 MBytes/sec
[ 3]  2.0- 3.0 sec   5144 MBytes   5144 MBytes/sec
[ 3]  3.0- 4.0 sec   5127 MBytes   5127 MBytes/sec
[ 3]  4.0- 5.0 sec   5124 MBytes   5124 MBytes/sec
[ 3]  5.0- 6.0 sec   5116 MBytes   5116 MBytes/sec
[ 3]  6.0- 7.0 sec   5121 MBytes   5121 MBytes/sec
[ 3]  7.0- 8.0 sec   5123 MBytes   5123 MBytes/sec
[ 3]  8.0- 9.0 sec   5116 MBytes   5116 MBytes/sec
[ 3]  9.0-10.0 sec   5120 MBytes   5120 MBytes/sec
[ 3]  0.0-10.0 sec   51223 MBytes  5122 MBytes/sec

iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 5] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 50903
[ ID] Interval       Transfer      Bandwidth
[ 5]  0.0-10.0 sec   50.0 GBytes   42.9 Gbits/sec

PC2:

iperf -c 127.0.0.1 -P 1 -i 1 -p 5001 -f M -t 10
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 0.05 MByte (default)
------------------------------------------------------------
[ 3] local 127.0.0.1 port 45360 connected with 127.0.0.1 port 5001
[ ID] Interval       Transfer      Bandwidth
[ 3]  0.0- 1.0 sec   2641 MBytes   2641 MBytes/sec
[ 3]  1.0- 2.0 sec   2677 MBytes   2677 MBytes/sec
[ 3]  2.0- 3.0 sec   2655 MBytes   2655 MBytes/sec
[ 3]  3.0- 4.0 sec   2672 MBytes   2672 MBytes/sec
[ 3]  4.0- 5.0 sec   2661 MBytes   2661 MBytes/sec
[ 3]  5.0- 6.0 sec   2665 MBytes   2665 MBytes/sec
[ 3]  6.0- 7.0 sec   2662 MBytes   2662 MBytes/sec
[ 3]  7.0- 8.0 sec   2691 MBytes   2691 MBytes/sec
[ 3]  8.0- 9.0 sec   2715 MBytes   2715 MBytes/sec
[ 3]  9.0-10.0 sec   2719 MBytes   2719 MBytes/sec
[ 3]  0.0-10.0 sec   26758 MBytes  2676 MBytes/sec

iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 45360
[ ID] Interval       Transfer      Bandwidth
[ 4]  0.0-10.0 sec   26.1 GBytes   22.4 Gbits/sec

As can be seen, both PCs are capable of handling the 10 Gigabit network cards.
TCP test

PC1: Client

iperf -c 192.167.1.2
------------------------------------------------------------
Client connecting to 192.167.1.2, TCP port 5001
TCP window size: 96.1 KByte (default)
------------------------------------------------------------
[ 3] local 192.167.1.1 port 35911 connected with 192.167.1.2 port 5001
[ ID] Interval       Transfer      Bandwidth
[ 3]  0.0-10.0 sec   11.5 GBytes   9.87 Gbits/sec

PC2: Server

iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.167.1.2 port 5001 connected with 192.167.1.1 port 35911
[ ID] Interval       Transfer      Bandwidth
[ 4]  0.0-10.0 sec   11.5 GBytes   9.87 Gbits/sec

UDP test

PC1: Client

iperf -u -c 192.167.1.2 -w 16M -b 10000m -l 65000
------------------------------------------------------------
Client connecting to 192.167.1.2, UDP port 5001
Sending 65000 byte datagrams
UDP buffer size: 32.0 MByte (WARNING: requested 16.0 MByte)
------------------------------------------------------------
[ 3] local 192.167.1.1 port 59447 connected with 192.167.1.2 port 5001
[ ID] Interval       Transfer      Bandwidth
[ 3]  0.0-10.0 sec   11.6 GBytes   9.92 Gbits/sec
[ 3] Sent 190862 datagrams
[ 3] Server Report:
[ 3]  0.0-10.0 sec   11.6 GBytes   9.92 Gbits/sec   0.038 ms   1/190861 (0.00052%)
[ 3]  0.0-10.0 sec   1 datagrams received out-of-order

PC2: Server

iperf -u -s -w 16M -l 65000
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 65000 byte datagrams
UDP buffer size: 32.0 MByte (WARNING: requested 16.0 MByte)
------------------------------------------------------------
[ 3] local 192.167.1.2 port 5001 connected with 192.167.1.1 port 59447
[ ID] Interval       Transfer      Bandwidth        Jitter     Lost/Total Datagrams
[ 3]  0.0-10.0 sec   11.6 GBytes   9.92 Gbits/sec   0.038 ms   1/190861 (0.00052%)
[ 3]  0.0-10.0 sec   1 datagrams received out-of-order
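The reported UDP figures are internally consistent, as a quick cross-check (added by the editor, not part of the report) shows; the numbers below are taken directly from the iperf output above:

# Cross-check of the UDP test: datagram count vs. transfer and bandwidth (illustrative sketch).
datagrams = 190862          # datagrams sent by the client
payload = 65000             # bytes per datagram (-l 65000)
duration = 10.0             # seconds

total_bytes = datagrams * payload
print(f"transfer : {total_bytes / 2**30:.1f} GBytes")                   # ~11.6 GBytes
print(f"bandwidth: {total_bytes * 8 / duration / 1e9:.2f} Gbits/sec")   # ~9.92 Gbits/sec
print(f"loss     : {1 / 190861 * 100:.5f} %")                           # ~0.00052 %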
The TCP test was successful without changing the default settings of Iperf. In the case of the UDP connection the bandwidth was around 8 Gbit/s with default settings, so it was necessary to increase the UDP buffer size to 16 MB. To maximise the throughput of the protocol, the largest available packet size was set. With these parameters the desired bandwidth was measured using the UDP protocol.
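For reference, a DAQ-side receiver would have to apply the same tuning to keep up with such a stream. The following minimal Python sketch (an editorial illustration, not the report's or the camera's software) mirrors the iperf parameters used above, i.e. a 16 MB socket receive buffer and 65000-byte datagrams on the default iperf port, and simply counts the received bytes:

# Minimal UDP receive-rate sketch mirroring the iperf UDP test parameters (illustrative).
import socket
import time

RECV_BUF = 16 * 1024 * 1024  # 16 MB; the kernel caps this at net.core.rmem_max, hence the sysctl tuning above
DATAGRAM = 65000             # datagram size used in the test (-l 65000)
PORT = 5001                  # default iperf port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Linux reports double the requested buffer size, which is why iperf printed
# "UDP buffer size: 32.0 MByte (WARNING: requested 16.0 MByte)" above.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, RECV_BUF)
sock.bind(("", PORT))

received = 0
start = time.time()
try:
    while True:
        received += len(sock.recv(DATAGRAM))
except KeyboardInterrupt:
    elapsed = max(time.time() - start, 1e-9)
    print(f"received {received / elapsed * 8 / 1e9:.2f} Gbit/s over {elapsed:.1f} s")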
4.2 Windows OS

The first test runs the server and the client on the same PC to test the PCI-E bandwidth.

iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[256] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 1149
[ ID] Interval       Transfer      Bandwidth
[256]  0.0-10.0 sec  7.23 GBytes   6.21 Gbits/sec

C:\Users\Pfolab\Desktop>iperf -c 127.0.0.1 -P 1 -i 1 -p 5001 -f M -t 10
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 0.01 MByte (default)
------------------------------------------------------------
[156] local 127.0.0.1 port 1149 connected with 127.0.0.1 port 5001
[ ID] Interval       Transfer      Bandwidth
[156]  0.0- 1.0 sec  726 MBytes    726 MBytes/sec
[156]  1.0- 2.0 sec  745 MBytes    745 MBytes/sec
[156]  2.0- 3.0 sec  718 MBytes    718 MBytes/sec
[156]  3.0- 4.0 sec  743 MBytes    743 MBytes/sec
[156]  4.0- 5.0 sec  750 MBytes    750 MBytes/sec
[156]  5.0- 6.0 sec  730 MBytes    730 MBytes/sec
[156]  6.0- 7.0 sec  747 MBytes    747 MBytes/sec
[156]  7.0- 8.0 sec  745 MBytes    745 MBytes/sec
[156]  8.0- 9.0 sec  749 MBytes    749 MBytes/sec
[156]  9.0-10.0 sec  753 MBytes    753 MBytes/sec
[156]  0.0-10.0 sec  7404 MBytes   740 MBytes/sec

TCP test

PC1: Server

iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[168] local 192.167.1.1 port 5001 connected with 192.167.1.2 port 1153
[ ID] Interval       Transfer      Bandwidth
[168]  0.0-10.0 sec  6.70 GBytes   5.76 Gbits/sec

PC2: Client

iperf -c 192.167.1.1 -P 1 -i 1 -p 5001 -w 1280K -f M -t 10
------------------------------------------------------------
Client connecting to 192.167.1.1, TCP port 5001
TCP window size: 1.25 MByte
------------------------------------------------------------
[156] local 192.167.1.2 port 1153 connected with 192.167.1.1 port 5001
[ ID] Interval       Transfer      Bandwidth
[156]  0.0- 1.0 sec  686 MBytes    686 MBytes/sec
[156]  1.0- 2.0 sec  684 MBytes    684 MBytes/sec
[156]  2.0- 3.0 sec  685 MBytes    685 MBytes/sec
[156]  3.0- 4.0 sec  685 MBytes    685 MBytes/sec
[156]  4.0- 5.0 sec  688 MBytes    688 MBytes/sec
[156]  5.0- 6.0 sec  685 MBytes    685 MBytes/sec
[156]  6.0- 7.0 sec  684 MBytes    684 MBytes/sec
[156]  7.0- 8.0 sec  686 MBytes    686 MBytes/sec
[156]  8.0- 9.0 sec  684 MBytes    684 MBytes/sec
[156]  9.0-10.0 sec  699 MBytes    699 MBytes/sec
[156]  0.0-10.0 sec  6865 MBytes   685 MBytes/sec

The UDP test shows results similar to the TCP connection. According to the manufacturer of the 10 Gb network cards, Microsoft's desktop operating systems (Windows XP, Vista and 7) are not ready to serve 10 Gb network devices; only the server operating systems (Windows Server 2003, 2008 and 2008 R2) can fully benefit from 10G Ethernet devices. Measurements with a Windows server OS are planned for later.

5 Conclusion

The measured data flow is strongly dependent on the hardware and software environment. For any application development a 64-bit environment is strongly suggested, as the memory usage can be very high. The x8 PCI-E bus was capable of serving the 10 Gb cards, so it will not limit the performance. As the test results show, of the systems tested only the Linux desktop operating system is ready to handle 10 Gb networks. A further test should be done with a Microsoft server operating system to assess its capabilities. Based on these results, the first version of the DAQ software is suggested to be developed for a 64-bit Linux OS.
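To put the 64-bit recommendation into perspective, a back-of-the-envelope estimate (an editorial addition, not a measured figure from the report) shows how quickly the expected stream fills a memory buffer:

# Rough estimate of how fast the 8.96 Gbit/s stream fills RAM (illustrative sketch).
rate_mbyte_per_s = 8.96e9 / 8 / 1e6      # ~1120 MByte/s, from Section 1
for buffer_gb in (2, 8, 16):
    seconds = buffer_gb * 1024 / rate_mbyte_per_s
    print(f"a {buffer_gb} GB buffer holds about {seconds:.1f} s of data")

Even a few seconds of acquisition therefore exceeds what a 32-bit process can comfortably address, which is why a 64-bit environment is suggested for the DAQ software.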